Research Output
Multimodal Emotion Recognition from Art Using Sequential Co-Attention
  In this study, we present a multimodal emotion recognition architecture that uses both feature-level attention (sequential co-attention) and modality attention (weighted modality fusion) to classify the emotion evoked by art. The proposed architecture enables the model to learn informative, refined representations for both feature extraction and modality fusion. The resulting system can be used to categorize artworks according to the emotions they evoke; to recommend paintings that accentuate or balance a particular mood; and to search for paintings of a particular style or genre that depict specific content with a desired emotional impact. Experimental results on the WikiArt emotion dataset demonstrated the effectiveness of the proposed approach and the usefulness of the three modalities for emotion recognition.
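The modality-attention step described above can be sketched as a softmax-weighted combination of per-modality feature vectors. This is a minimal illustrative sketch, not the paper's exact formulation: the function name `weighted_modality_fusion`, the use of a single learned projection vector to score each modality, and the toy dimensions are all assumptions made for clarity.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def weighted_modality_fusion(modality_feats, w, b=0.0):
    """Fuse per-modality feature vectors into one representation.

    modality_feats: (M, D) array, one row per modality.
    w: (D,) scoring projection (hypothetical stand-in for a learned layer).
    Each modality gets a scalar relevance score; the fused vector is the
    softmax-weighted sum of the modality rows.
    """
    scores = modality_feats @ w + b      # (M,) one scalar score per modality
    alpha = softmax(scores)              # attention weights, sum to 1
    fused = alpha @ modality_feats       # (D,) weighted combination
    return fused, alpha

# Toy example with three modality embeddings of dimension 4.
rng = np.random.default_rng(0)
feats = rng.standard_normal((3, 4))
w = rng.standard_normal(4)
fused, alpha = weighted_modality_fusion(feats, w)
```

In practice the scoring projection would be trained jointly with the rest of the network, so that modalities carrying more emotional signal for a given artwork receive larger weights.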

  • Type:

    Article

  • Date:

    21 August 2021

  • Publication Status:

    Published

  • Publisher:

    MDPI AG

  • DOI:

    10.3390/jimaging7080157

  • ISSN:

    2313-433X

  • Funders:

    Historic Funder (pre-Worktribe)

Citation

Tashu, T. M., Hajiyeva, S., & Horvath, T. (2021). Multimodal Emotion Recognition from Art Using Sequential Co-Attention. Journal of Imaging, 7(8), Article 157. https://doi.org/10.3390/jimaging7080157

Keywords

multimodal; emotions; attention; art; modality fusion; emotion analysis
