The PyTorch code for the papers:
- CONSK-GCN: Conversational Semantic- and Knowledge-Oriented Graph Convolutional Network for Multimodal Emotion Recognition
- Context- and Knowledge-Aware Graph Convolutional Network for Multimodal Emotion Recognition
The code is based on DialogueGCN.
Knowledge preparation (you can skip this step by using the already preprocessed knowledge files in the Data directory):
- Download ConceptNet and NRC_VAD.
- Preprocess ConceptNet and NRC_VAD: run `preprocess_knowledge.py`.
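For orientation, a minimal sketch of loading an NRC_VAD-style lexicon (assuming the common tab-separated word/valence/arousal/dominance layout with a header row; the repo's `preprocess_knowledge.py` handles the actual files and formats):

```python
from io import StringIO

def load_vad(fileobj):
    """Map each word to its (valence, arousal, dominance) scores."""
    lexicon = {}
    next(fileobj)  # skip the header line
    for line in fileobj:
        word, v, a, d = line.rstrip("\n").split("\t")
        lexicon[word] = (float(v), float(a), float(d))
    return lexicon

# Toy two-line sample standing in for the real lexicon file
sample = "Word\tValence\tArousal\tDominance\nhappy\t0.960\t0.732\t0.850\n"
vad = load_vad(StringIO(sample))
print(vad["happy"])  # → (0.96, 0.732, 0.85)
```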
Model training: run `train_multi.py` for both the IEMOCAP and MELD datasets.
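For readers new to graph convolutional networks, the core graph-convolution step that GCN-style models build on (a generic sketch, not the paper's exact CONSK-GCN layer) looks like:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 · H · W)."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt  # symmetric normalization
    return np.maximum(A_norm @ H @ W, 0.0)    # ReLU activation

# Toy conversation graph: 3 utterance nodes, 2-dim features, 4-dim output
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
H = np.random.rand(3, 2)
W = np.random.rand(2, 4)
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 4)
```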
If you find this repo or paper useful, please cite:
```bibtex
@article{fu2022context,
  title={Context- and Knowledge-Aware Graph Convolutional Network for Multimodal Emotion Recognition},
  author={Fu, Yahui and Okada, Shogo and Wang, Longbiao and Guo, Lili and Liu, Jiaxing and Song, Yaodong and Dang, Jianwu},
  journal={IEEE MultiMedia},
  year={2022},
  publisher={IEEE}
}
```