ORCID Profile
0000-0002-5699-0176
Current Organisation
Zhengzhou University of Light Industry
Publisher: Association for Computing Machinery (ACM)
Date: 31-10-2022
DOI: 10.1145/3527175
Abstract: The recent boom in artificial intelligence (AI) applications, e.g., affective robots, human-machine interfaces, and autonomous vehicles, has produced a great number of multi-modal records of human communication. Such data often carry latent subjective user attitudes and opinions, which provides a practical and feasible path to connect human emotion with intelligence services. Sentiment and emotion analysis of multi-modal records is of great value for improving the intelligence level of affective services. However, finding an optimal way to learn representations of people's sentiments and emotions remains difficult, since both involve subtle mental activity. Many approaches have been published to address this problem, but most mine sentiment and emotion insufficiently because they treat sentiment analysis and emotion recognition as two separate tasks, neglecting the interaction between them and thereby limiting the efficiency of sentiment and emotion representation learning. In this work, emotion is seen as the external expression of sentiment, while sentiment is the essential nature of emotion. We thus argue that they are strongly related to each other, where the judgment of one aids the decision on the other. The key challenges are multi-modal fused representation and the interaction between sentiment and emotion. To address these issues, we design an external-knowledge-enhanced multi-task representation learning network, termed KAMT. Its major elements are two attention mechanisms, inter-modal and inter-task attention, and an external knowledge augmentation layer. The external knowledge augmentation layer extracts vectors encoding the participant's gender, age, and occupation, as well as overall color or shape. Inter-modal attention mainly captures effective multi-modal fused features, while inter-task attention models the correlation between sentiment analysis and emotion classification. We perform experiments on three widely used datasets, and the experimental performance proves the effectiveness of the KAMT model.
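The exact attention formulation is not given in this abstract; as a rough illustration of the inter-modal attention idea, the following NumPy sketch (all shapes, names, and feature dimensions hypothetical) shows one modality's features attending over another's via generic scaled dot-product attention:

```python
import numpy as np

def cross_attention(query, key, value):
    """Scaled dot-product attention: query tokens attend over key/value tokens.

    Illustrates inter-modal attention: features of one modality (query)
    attend over features of another modality (key/value) to produce
    fused representations.
    """
    d_k = query.shape[-1]
    scores = query @ key.T / np.sqrt(d_k)            # (Tq, Tk) similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ value                           # (Tq, d_v) fused output

rng = np.random.default_rng(0)
text_feats  = rng.normal(size=(4, 16))   # 4 text tokens, 16-dim (hypothetical)
audio_feats = rng.normal(size=(6, 16))   # 6 audio frames, 16-dim (hypothetical)

# Text queries attend over audio keys/values (inter-modal fusion).
fused = cross_attention(text_feats, audio_feats, audio_feats)
print(fused.shape)  # (4, 16)
```

Each fused text token is a convex combination of audio features, which is what lets the model pick out the audio frames most relevant to each word.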
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 08-2023
Publisher: Elsevier BV
Date: 05-2023
Publisher: Elsevier BV
Date: 2021
Publisher: Association for Computing Machinery (ACM)
Date: 18-05-2023
DOI: 10.1145/3533430
Abstract: Computational Linguistics (CL) associated with Internet of Multimedia Things (IoMT)-enabled multimedia computing applications brings several research challenges, such as real-time speech understanding, deepfake video detection, emotion recognition, and home automation. With the emergence of machine translation, CL solutions have increased tremendously across natural language processing (NLP) applications, and NLP-enabled IoMT is now essential to their success. Sarcasm detection, a recently emerging artificial intelligence (AI) and NLP task, aims at discovering sarcastic, ironic, and metaphoric information implied in texts generated in the IoMT, and has drawn much attention from the AI and IoMT research communities. Advances in sarcasm detection and NLP techniques will provide a cost-effective, intelligent way to work with machine devices and enable high-level human-to-device interaction. However, existing sarcasm detection approaches neglect the hidden stance behind texts and are thus insufficient to exploit the full potential of the task. Indeed, the stance, i.e., whether the author of a text is in favor of, against, or neutral toward the proposition or target discussed in the text, largely determines the text's actual sarcasm orientation. To fill this gap, we propose a new task: stance-level sarcasm detection (SLSD), whose goal is to uncover the author's latent stance and, based on it, identify the sarcasm polarity expressed in the text. We then propose an integral framework consisting of Bidirectional Encoder Representations from Transformers (BERT) and a novel stance-centered graph attention network (SCGAT). Specifically, BERT captures the sentence representation, and SCGAT captures the stance information on a specific target. Extensive experiments are conducted on a Chinese sarcasm sentiment dataset we created and on the SemEval-2018 Task 3 English sarcasm dataset. The experimental results prove the effectiveness of the SCGAT framework over state-of-the-art baselines by a large margin.
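SCGAT's internals are not specified in this abstract; a single generic graph-attention layer in the standard GAT style (NumPy sketch, all names and sizes hypothetical) conveys the core mechanism of a node attending over its graph neighbors:

```python
import numpy as np

def gat_layer(H, A, W, a):
    """One generic graph-attention layer.

    H: (N, F) node features; A: (N, N) adjacency (1 = edge, incl. self-loops);
    W: (F, Fp) weight matrix; a: (2*Fp,) attention vector.
    """
    Wh = H @ W                                    # (N, Fp) projected features
    Fp = Wh.shape[1]
    # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]), split into source + target parts
    e = (Wh @ a[:Fp])[:, None] + (Wh @ a[Fp:])[None, :]
    e = np.where(e > 0, e, 0.2 * e)               # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -1e9)                  # mask non-neighbors
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)     # softmax over neighbors
    return alpha @ Wh                             # (N, Fp) aggregated output

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))                       # 5 nodes, 8-dim features
A = np.eye(5) + np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)  # path graph
W = rng.normal(size=(8, 4))
a = rng.normal(size=(8,))
out = gat_layer(H, A, W, a)
print(out.shape)  # (5, 4)
```

The adjacency mask restricts each node's softmax to its neighborhood, which is how a stance-centered graph would let target nodes aggregate only stance-relevant context.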
Publisher: Association for Computing Machinery (ACM)
Date: 21-08-2023
DOI: 10.1145/3593583
Abstract: Sentiment and emotion, which correspond to long-term and short-lived human feelings respectively, are closely linked, so sentiment analysis and emotion recognition are two interdependent tasks in natural language processing (NLP). One task often leverages shared knowledge from the other and performs better when solved in a joint learning paradigm. Conversational context dependency, multi-modal interaction, and multi-task correlation are three key factors that contribute to this joint paradigm, yet no recent approach has considered all of them in a unified framework. To fill this gap, we propose a multi-modal, multi-task interactive graph attention network, termed M3GAT, to solve the three problems simultaneously. At the heart of the model is a proposed interactive conversation graph layer containing three core sub-modules: (1) a local-global context connection for modeling both local and global conversational context, (2) a cross-modal connection for learning multi-modal complementarity, and (3) a cross-task connection for capturing the correlation across the two tasks. Comprehensive experiments on three benchmark datasets, MELD, MEISD, and MSED, show the effectiveness of M3GAT over state-of-the-art baselines, with margins of 1.88%, 5.37%, and 0.19% for sentiment analysis and 1.99%, 3.65%, and 0.13% for emotion recognition, respectively. In addition, we show the superiority of multi-task learning over the single-task framework.
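The abstract names three edge types for the conversation graph; the toy sketch below (not the authors' actual construction; node layout, window size, and edge rules are illustrative assumptions, and only the local context window is modeled) builds an adjacency matrix over (utterance, modality, task) nodes with the three connection types:

```python
import numpy as np

# Toy conversation: 3 utterances, 2 modalities, 2 tasks -> one node per
# (utterance, modality, task) combination, 12 nodes total.
n_utt = 3
modalities = ["text", "audio"]
tasks = ["sentiment", "emotion"]
nodes = [(u, m, t) for u in range(n_utt) for m in modalities for t in tasks]

A = np.eye(len(nodes))  # self-loops
for i, (u1, m1, t1) in enumerate(nodes):
    for j, (u2, m2, t2) in enumerate(nodes):
        if m1 == m2 and t1 == t2 and abs(u1 - u2) == 1:
            A[i, j] = 1  # local context connection: adjacent utterances
        if u1 == u2 and t1 == t2 and m1 != m2:
            A[i, j] = 1  # cross-modal connection: same utterance and task
        if u1 == u2 and m1 == m2 and t1 != t2:
            A[i, j] = 1  # cross-task connection: sentiment <-> emotion

print(A.shape)         # (12, 12)
print(int(A.sum()))    # 52 nonzero entries (12 self-loops + 40 edges)
```

A graph-attention layer applied over this adjacency would then let each node aggregate context, cross-modal, and cross-task information in one pass.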
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
Date: 12-2021
Publisher: Association for Computing Machinery (ACM)
Date: 26-08-2023
DOI: 10.1145/3604550
Abstract: Quantum theory, originally proposed as a physical theory to describe the motions of microscopic particles, has been applied to various non-physics domains involving human cognition and decision-making that are inherently uncertain and exhibit certain non-classical, quantum-like characteristics. Sentiment analysis is a typical example of such domains. In the last few years, by leveraging the modeling power of quantum probability (a non-classical probability stemming from quantum mechanics methodology) and deep neural networks, a range of novel quantum-cognitively inspired models for sentiment analysis have emerged and performed well. This survey presents a timely overview of the latest developments in this fascinating cross-disciplinary area. We first provide a background of quantum probability and quantum cognition at a theoretical level, analyzing their advantages over classical theories in modeling the cognitive aspects of sentiment analysis. Then, recent quantum-cognitively inspired models are introduced and discussed in detail, focusing on how they approach the key challenges of the sentiment analysis task. Finally, we discuss the limitations of the current research and highlight future research directions.
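As a minimal illustration of the quantum probability formalism such surveys cover (a toy textbook example, not any specific model from this literature), a sentence state can be written as a superposition over sentiment basis states, with outcome probabilities given by the Born rule:

```python
import numpy as np

# Two orthonormal basis states spanning a toy 2-d "sentiment space".
pos = np.array([1.0, 0.0])   # basis state |positive>
neg = np.array([0.0, 1.0])   # basis state |negative>

# A sentence state |psi> as a superposition; amplitudes are square roots
# of the intended probabilities (0.7 positive, 0.3 negative).
psi = np.sqrt(0.7) * pos + np.sqrt(0.3) * neg

# Born rule: probability of an outcome is the squared projection amplitude.
p_pos = abs(pos @ psi) ** 2   # 0.7
p_neg = abs(neg @ psi) ** 2   # 0.3
print(p_pos, p_neg)
```

The non-classical behavior these models exploit appears when states are measured in incompatible bases, where interference terms make the probabilities deviate from classical (Kolmogorovian) additivity.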
Publisher: MDPI AG
Date: 21-10-2021
Abstract: During the recent pandemic, accurate and rapid testing of patients remained a critical task in diagnosing and controlling the spread of COVID-19 in the healthcare industry. Because of the sudden increase in cases, most countries faced test scarcity and low testing rates. Chest X-rays have been shown in the literature to be a potential source of testing for COVID-19 patients, but manually checking X-ray reports is time-consuming and error-prone. Considering these limitations and the advancements in data science, we proposed a Vision Transformer-based deep learning pipeline for COVID-19 detection from chest X-ray imaging. Due to the lack of large datasets, we collected data from three open-source datasets of chest X-ray images and aggregated them into a 30K-image dataset, which is, to our knowledge, the largest publicly available collection of chest X-ray images in this domain. Our proposed transformer model effectively differentiates COVID-19 from normal chest X-rays with an accuracy of 98% and an AUC score of 99% in the binary classification task, and it distinguishes COVID-19, normal, and pneumonia patients' X-rays with an accuracy of 92% and an AUC score of 98% in the multi-class classification task. For evaluation on our dataset, we fine-tuned several widely used models from the literature, namely EfficientNetB0, InceptionV3, ResNet50, MobileNetV3, Xception, and DenseNet-121, as baselines; our proposed transformer model outperformed them on all metrics. In addition, a Grad-CAM-based visualization is created, which makes our approach interpretable to radiologists and can be used to monitor the progression of the disease in the affected lungs, assisting healthcare providers.
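A Vision Transformer's first step splits each image into fixed-size patches that are flattened into tokens; a minimal NumPy sketch of that patchification (image and patch sizes chosen for illustration, not necessarily the paper's configuration):

```python
import numpy as np

def patchify(img, patch):
    """Split an HxW grayscale image into non-overlapping flattened patches,
    the token sequence a Vision Transformer consumes."""
    H, W = img.shape
    assert H % patch == 0 and W % patch == 0
    rows, cols = H // patch, W // patch
    # Group pixels into (row-block, col-block) patches, then flatten each.
    patches = img.reshape(rows, patch, cols, patch).transpose(0, 2, 1, 3)
    return patches.reshape(rows * cols, patch * patch)

# A stand-in 224x224 "X-ray" with the common ViT patch size of 16.
x = np.arange(224 * 224, dtype=float).reshape(224, 224)
tokens = patchify(x, 16)
print(tokens.shape)  # (196, 256): 14x14 patches, each 16x16 pixels flattened
```

Each of the 196 tokens is then linearly projected and combined with a position embedding before entering the transformer encoder; a classification head on top yields the binary or multi-class prediction.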
No related grants have been discovered for Yazhou Zhang.