MRAC: Multimodal, Generative and Responsible Affective Computing (ACM-MM 2025)
Introduction #
Affective Computing involves the creation, evaluation, and deployment of Emotion AI and affective technologies to improve people’s lives. Each of these stages requires large amounts of multimodal data, from RGB images to video, audio, text, and physiological signals. In principle, the development of any AI system must be guided by concern for its human impact: the aim should be to safely augment and enhance humans, not replace them, while taking inspiration from human intelligence. To this end, the MRAC 2025 workshop aims to transfer these concepts from small-scale, lab-based environments to real-world, large-scale corpora, enhanced with responsibility. The workshop also aims to draw the attention of researchers and industry professionals to the potential implications of generative technology and its ethical consequences.
Call for Contributions #
Full Workshop Papers #
The 3rd International Workshop on Multimodal, Generative and Responsible Affective Computing (MRAC 2025) at ACM-MM 2025 (track for Multimodal and Responsible Affective Computing) aims to encourage and highlight novel strategies for estimating and predicting affective phenomena, with a focus on robustness and accuracy in extended parameter spaces: spatially, temporally, spatio-temporally, and, most importantly, responsibly. This is expected to be achieved by applying novel neural network architectures and generative AI, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multimodal training. Specifically, the workshop topics include (but are not limited to):
- Large-scale data generation or inexpensive annotation for Affective Computing
- Generative AI for Affective Computing using multimodal signals
- Multimodal methods for emotion recognition
- Privacy-preserving large-scale emotion recognition in the wild
- Generative aspects of affect analysis
- Deepfake generation, detection and temporal deepfake localization
- Multimodal data analysis
- Affective Computing applications in education, entertainment, and healthcare
- Explainable or privacy-preserving AI in affective computing
- Generative and responsible personalization of affective phenomena estimators with few-shot learning
- Bias in affective computing data (e.g. lack of multi-cultural datasets)
- Semi-/weak-/un-/self- supervised learning methods, domain adaptation methods, and other novel methods for Affective Computing
We welcome submissions of full, unpublished, and original papers. Submissions will be peer-reviewed in a double-blind process, published in the official workshop proceedings, and presented at the workshop itself.
Submission #
We invite authors to submit unpublished papers (ACM-MM format) to our workshop, to be presented at an oral/poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) via OpenReview. Accepted papers will be published in the official ACM-MM Workshops proceedings.
- Workshop full papers: 8-page limit + 2 extra pages for references only
- Workshop short papers: 4-page limit + 1 extra page for references only
Note #
Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers’ comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how the comments made by previous reviewers have been addressed.
Important Dates #
| Event | Date |
| --- | --- |
| Paper Submission Deadline | July 25, 2025 (12:00 Pacific Time) |
| Notification to Authors | Aug 9, 2025 |
| Camera-Ready Deadline | Aug 15, 2025 (12:00 Pacific Time) |
Organizers #
- Curtin University
- Monash University
- Flinders University
- UNSW Canberra
- Curtin University
Contact #
Please contact us if you have any questions.
Email: shreya.ghosh@curtin.edu.au, Zhixi.Cai@monash.edu, abhinav.dhall@flinders.edu.au
Image Source: Wall-E