MRAC: Multimodal, Generative and Responsible Affective Computing (ACM-MM 2024)
Introduction #
Affective Computing involves the creation, evaluation and deployment of Emotion AI and affective technologies to make people's lives better. The creation, evaluation and deployment stages of an Emotion AI model require large amounts of multimodal data, from RGB images to video, audio, text, and physiological signals. In principle, the development of any AI system must be guided by concern for its human impact. The aim should be to augment and enhance humans, not to replace them, while taking inspiration from human intelligence safely. To this end, the MRAC 2024 workshop aims to transfer these concepts from a small-scale, lab-based environment to a real-world, large-scale corpus enhanced with responsibility. The workshop also aims to bring the potential implications of generative technology, along with its ethical consequences, to the attention of researchers and industry professionals.
Call for Contributions #
Full Workshop Papers #
The 2nd International Workshop on Multimodal, Generative and Responsible Affective Computing (MRAC 2024) at ACM-MM 2024 (track for Multimodal and Responsible Affective Computing) aims to encourage and highlight novel strategies for affective phenomena estimation and prediction, with a focus on robustness and accuracy in extended parameter spaces (spatially, temporally, spatio-temporally and, most importantly, responsibly). This is expected to be achieved by applying novel neural network architectures and generative AI, incorporating anatomical insights and constraints, introducing new and challenging datasets, and exploiting multimodal training. Specifically, the workshop topics include (but are not limited to):
- Large-scale data generation or inexpensive annotation for Affective Computing
- Generative AI for Affective Computing using multimodal signals
- Multimodal methods for emotion recognition
- Privacy-preserving large-scale emotion recognition in the wild
- Generative aspects of affect analysis
- Deepfake generation, detection and temporal deepfake localization
- Multimodal data analysis
- Affective Computing applications in education, entertainment and healthcare
- Explainable or privacy-preserving AI in Affective Computing
- Generative and responsible personalization of affective phenomena estimators with few-shot learning
- Bias in Affective Computing data (e.g., lack of multi-cultural datasets)
- Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel methods for Affective Computing
We will be accepting submissions of full, unpublished, original papers. These papers will be peer-reviewed via a double-blind process, published in the official workshop proceedings, and presented at the workshop itself.
Submission #
We invite authors to submit unpublished papers (in ACM-MM format) to our workshop, to be presented at an oral/poster session upon acceptance. All submissions will go through a double-blind review process. All contributions must be submitted (along with supplementary materials, if any) via OpenReview. Accepted papers will be published in the official ACM-MM Workshop proceedings.
Workshop full papers: 8 page limit + 2 extra pages for references only
Workshop short papers: 4 page limit + 1 extra page for references only
Note #
Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how the previous reviewers' comments have been addressed.
Important Dates #
Paper Submission Deadline | July 25, 2024 (12:00 Pacific Time)
Notification to Authors | Aug 9, 2024
Camera-Ready Deadline | Aug 15, 2024 (12:00 Pacific Time)
Registration #
Workshop registration will be handled by the ACM-MM-2024 main conference committee. Please follow the ACM-MM-2024 website for related information.
Presentation Instructions #
Please prepare a 15-minute presentation for your accepted paper (12-minute talk and 3-minute Q&A).
Please ensure that any slides you use for your presentation are in PowerPoint format, with the slide aspect ratio set to 16:9. To change the aspect ratio in PowerPoint, go to the 'Design' tab at the top, then select 'Slide Size' (usually located on the far right); there you will find the ratio options.
Please send a copy of your presentation to shreya.ghosh@curtin.edu.au and zhixi.cai@monash.edu. You will need to bring your PowerPoint presentation on a USB drive with you to the conference. If your presentation includes any video files, please save these files separately on your USB drive.
Workshop Schedule #
Friday, Nov 1st #
Time zone: AEDT (GMT+11)
Location: Meeting Room 217, Level 2 of Melbourne Convention and Exhibition Centre (MCEC)
1 Convention Centre Place, South Wharf VIC 3006
More details: ACM MM full-program page
09:00am - 09:05am | Opening and welcome
09:05am - 10:00am | Keynote 1: Seeing in 3D: Assistive Robotics with Advanced Computer Vision by Prof. Mohammed Bennamoun
10:00am - 10:15am | Paper 1: THE-FD: Task Hierarchical Emotion-aware for Fake Detection
10:15am - 10:30am | Paper 2: Are You Paying Attention? Multimodal Linear Attention Transformers for Affect Prediction in Video Conversations
10:30am - 11:00am | Break (Morning Tea)
11:00am - 12:00pm | Keynote 2: Wearable Sensing for Longitudinal Automatic Task Analysis by Prof. Julien Epps
12:00pm - 12:15pm | Paper 3: W-TDL: Window-Based Temporal Deepfake Localization
12:15pm - 12:30pm | Paper 4: Can Expression Sensitivity Improve Macro- and Micro-Expression Spotting in Long Videos?
12:30pm - 12:35pm | Closing Remarks
Invited Keynote Speakers #
Prof. Julien Epps
University of New South Wales
Biography: Prof. Julien Epps is Professor of Digital Signal Processing and Dean of Engineering at the University of New South Wales, Sydney, Australia, where he was previously Head of the School of Electrical Engineering and Telecommunications. He also holds an appointment as Co-Director of the NSW Smart Sensing Network. He is a Scientific Advisor for the Boston-based startup Sonde Health, where he has worked on speech-based assessment of mental health, and has held an appointment as a Contributed Principal Researcher with Data61, CSIRO, where he worked on methods for automatic task analysis using behavioural and physiological signals. A passionate educator, he is an Emeritus Fellow of the UNSW Scientia Education Academy. His research interests also include applications of speech modelling and processing, in particular to emotion and mental state recognition from speech and signals from wearable sensors. He has also worked on genomic sequence processing and aspects of human-computer interaction, including multimodal interfaces and computer-supported cooperative work.
Prof. Mohammed Bennamoun
University of Western Australia
Biography: Prof. Mohammed Bennamoun is currently a Winthrop Professor at the University of Western Australia. He served as Head of the School of Computer Science and Software Engineering at UWA for five years (February 2007 - March 2012). He was an Erasmus Mundus Scholar and Visiting Professor at the University of Edinburgh in 2006. He was also a Visiting Professor at CNRS (Centre National de la Recherche Scientifique) and Telecom Lille1, France in 2009, the Helsinki University of Technology in 2006, and the University of Bourgogne and Paris 13 in France in 2002-2003. He won the UWA Vice-Chancellor's Research Mentorship Award in 2016 and the UWA Award for Teaching Excellence for Research Supervision in 2016, where he was congratulated for his "outstanding contributions and for going above and beyond, to inspire, support and educate". He won the "Best Supervisor of the Year" Award at QUT and also received an award for research supervision at UWA in 2008. His areas of interest include computer vision (particularly 3D, e.g., object recognition and biometrics), machine/deep learning, robotics (e.g., obstacle avoidance and robot grasping), signal/image processing, and control theory.
Organizers #
Curtin University
Monash University
Flinders University
Queen Mary University of London
UNSW Canberra
Curtin University
Program Committee #
IIIT Delhi
KTH Sweden
Monash University
IIT Guwahati
Griffith University
Curtin University
Curtin University
IIT Ropar
IIT Ropar
Monash University
Contact #
Please contact us if you have any questions.
Email: shreya.ghosh@curtin.edu.au, Zhixi.Cai@monash.edu, abhinav.dhall@flinders.edu.au
Image Source: Wall-E