A2I: Affective Artificial Intelligence (ICPR 2024)
Introduction #
The workshop on Affective Artificial Intelligence (A2I) at ICPR 2024 aims to encourage and highlight state-of-the-art research in affective computing and its applications. The key focuses are novel neural network architectures, the incorporation of anatomical insights and constraints, new and challenging datasets, and multi-modal training.
Call for Contributions #
Full Workshop Papers #
The workshop topics include (but are not limited to):
- Large-scale data generation or inexpensive annotation for Affective Computing
- AI methods for Affective Computing with multimodal data
- Multi-modal methods for emotion recognition
- Explainable and/or Privacy Preserving AI in affective computing
- Generative and responsible personalization of affective phenomena estimators with few-shot learning
- Bias in affective computing data (e.g., lack of multi-cultural datasets)
- Semi-/weak-/un-/self- supervised learning methods, domain adaptation methods, and other novel methods for Affective Computing
- Applications in education, entertainment, and healthcare
We invite the submission of full, unpublished, and original papers. Submissions will be peer-reviewed in a single-blind process. Accepted workshop papers will be published in Lecture Notes in Computer Science (LNCS), Springer (https://www.springer.com/gp/computer-science/lncs). Please note that the proceedings will be published after the workshop.
Submission #
We invite authors to submit unpublished papers (15 pages including references, ICPR format) to our workshop, to be presented at an oral/poster session upon acceptance. All submissions will go through a single-blind review process. All contributions must be submitted (along with supplementary materials, if any) via Microsoft CMT.
Note #
Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must submit the previous reviewers' comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) as part of your supplementary materials to clearly demonstrate how the previous reviewers' comments have been addressed.
Important Dates #
- Paper Submission Deadline:
- Notification to Authors: September 15, 2024 (11:55 pm Anywhere on Earth)
- Camera-Ready Deadline: You will need to upload the camera-ready version and copyright form of your paper via a link provided by Springer; Springer will communicate with the contact author directly via email (this is expected around the first or second week of November 2024; we will update you around that time).
Workshop Schedule #
TBD
Invited Keynote Speakers #
Prof. Dinesh Manocha, University of Maryland
Title: Multimodal and Context-Aware Emotion Perception
Abstract: Human emotion perception is integral to the wide applications of intelligent systems, including behavior prediction, social robotics, medicine, surveillance, and entertainment. Current literature advocates that humans perceive emotions and behavior from various human modalities and from situational and background contexts. Our research focuses on this aspect of emotion perception: we build emotion perception models from multiple modalities and contextual cues, and we use these ideas of perception in various real-world domains of AI applications. We will cover both parts in this talk. In the first part, we will explore two approaches to improving emotion perception models. In one approach, we leverage more than one modality to perceive human emotion. In the other, we leverage contextual information: the background scene, multiple modalities of the human subject, and the socio-dynamic inter-agent interactions available in the input to predict the perceived emotion. In the second part, we will explore three domains of AI applications: i) video manipulation and deepfake detection, ii) multimedia content analysis, and iii) investigation of social media interactions, enriching solutions with ideas from emotion perception.
Biography: Prof. Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, audio, physically-based modeling, and robotics. His group has developed multiple software packages that are standard and licensed to 60+ commercial vendors. He has published more than 790 papers and supervised 50 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which Valve Inc. acquired in November 2016.
A/Prof. Aniket Bera, Purdue University
Title: Designing Behaviorally-Intelligent Agents: Modeling Motion, Interactions, and Collaborations
Abstract: Recent advances in robotic technologies are gradually enabling humans and robots to co-exist, co-work, and share spaces in different environments. Robots are increasingly required to navigate along socially acceptable yet collision-free paths through crowds in places such as campuses, airports, and shopping malls; to interact with and understand people in their homes, workplaces, and hospitals; and to share responsibility for completing tasks, meaning that robots are becoming social partners and teammates with humans. In the future, these socially intelligent robots will likely enter almost all human domains, including healthcare facilities, factories, airports, warehouses, and schools.
In this talk, we will focus on our research on developing autonomous systems that navigate complex and dynamic environments with a high degree of situational awareness and human-like adaptability. This work leverages deep learning and geometric reasoning to create behavior-aware navigation strategies, allowing robots to predict and respond to human movement patterns in real time.
Biography: A/Prof. Aniket Bera is an Associate Professor in the Department of Computer Science at Purdue University. He directs the interdisciplinary research lab IDEAS (Intelligent Design for Empathetic and Augmented Systems) at Purdue, working on modeling the "human" and "social" aspects using AI in Robotics, Graphics, and Vision. He is also an Adjunct Associate Professor at the University of Maryland at College Park. Prior to this, he was a Research Assistant Professor at the University of North Carolina at Chapel Hill, where he received his Ph.D. in 2017. He is currently serving as Senior Editor for IEEE Robotics and Automation Letters (RA-L) in the area of "Planning and Simulation", and served as Conference Chair for the ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG 2022) and Outreach Chair for the 22nd ACM SIGGRAPH/Eurographics Symposium on Computer Animation (SCA 2023). His core research interests are in Affective Computing, Computer Graphics (AR/VR, Augmented Intelligence, Multi-Agent Simulation), Social Robotics, Autonomous Agents, Cognitive Modeling, and planning for intelligent characters. He has advised and co-advised multiple M.S. and Ph.D. students. His work has won multiple best paper awards at top Graphics/VR conferences. He also works with the University of Maryland at Baltimore Medical School to build algorithms and systems that help therapists and doctors detect mental health and social anxiety issues. His research involves novel combinations of methods and collaborations in machine learning, computational psychology, computer graphics, and physically-based simulation to develop real-time computational models that learn human behaviors. Dr. Bera has previously worked in many research labs, including Disney Research, Intel, and the Centre for Development of Advanced Computing. Dr. Bera's research has been featured on CBS, WIRED, Forbes, FastCompany, NPR, and elsewhere.
Dr. Jainendra Shukla, IIIT-Delhi
Biography: Dr. Jainendra Shukla, founder and director of the Human-Machine Interaction [HMI] research group at IIIT-Delhi, holds a Ph.D. with Industrial Doctorate and International Doctorate distinctions from Universitat Rovira i Virgili (URV), Spain. Earlier, he completed an M.Tech. in I.T. with a specialization in Robotics at the Indian Institute of Information Technology, Allahabad (IIIT-Allahabad), and a B.E. in I.T. with First Class and Distinction from the University of Mumbai. He is enthusiastic about empowering machines with emotional intelligence and adaptive interaction abilities that can improve the quality of life in health and social care. His research has been disseminated in several venues of international reputation, including CHI, UbiComp, and IEEE Transactions on Affective Computing. He co-developed the first-ever MOOC on Affective Computing for NPTEL. He also serves as an Associate Editor on the editorial board of IEEE Transactions on Affective Computing. Currently, as Principal Investigator, he is leading the European Education and Culture Executive Agency-funded project Capacity Building in Robotics & Autonomous Systems in India [IRAS-HUB]. Previously, his research has been supported by the Startup Research Grant (SRG) from SERB, Government of India, and an Industrial Doctorate research grant from the Government of Spain. His contributions have been widely recognized, including an Honorable Mention at CHI 2024 by SIGCHI, and Distinguished Research, Research Excellence, and Teaching Excellence awards from IIIT-Delhi.
Dr. Puneet Gupta, IIT Indore
Title: Unravelling the Importance of Remote Photoplethysmography (rPPG) in Affective Computing
Abstract: In the rapidly evolving field of affective computing, the integration of Remote Photoplethysmography (rPPG) has emerged as an innovative technique for non-invasive physiological monitoring. This keynote addresses the pivotal role of rPPG in advancing our understanding and application of affective computing. Remote Photoplethysmography (or rPPG) is a technique that enables the measurement of blood volume changes in the skin using camera-based systems, providing a contactless method to capture physiological signals such as heart rate and respiratory rate. By analyzing subtle changes in skin colour, rPPG offers a unique window into the autonomic nervous system's responses, making it an invaluable tool for emotion recognition and mental health assessment. This talk will delve into the core principles of rPPG and explore its diverse applications within affective computing. We will examine how rPPG can be utilized to detect facial microexpressions, which are critical for understanding genuine emotional states and enhancing lie detection systems.
Additionally, we will discuss the application of rPPG in stress detection, providing insights into real-time monitoring of stress levels in various environments, from workplaces to daily life scenarios. Moreover, the keynote will highlight the potential of rPPG in the early detection and monitoring of depression, offering a non-intrusive method to support mental health professionals. We will also explore its use in other domains, such as anxiety detection, fatigue monitoring, and enhancing user experience in human-computer interaction. By unravelling the importance of rPPG in affective computing, this keynote aims to underscore the transformative impact of this technology on both research and practical applications, paving the way for more sophisticated, empathetic, and responsive systems.
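As a rough illustration of the rPPG principle described in the abstract above (a minimal sketch under simplifying assumptions, not the speaker's method), the Python snippet below averages the green channel over an assumed face bounding box in each video frame and reads a heart-rate estimate off the dominant frequency of that signal. The frame source, the face_box coordinates, the frame rate, and the 0.7-4 Hz heart-rate band are all illustrative choices.

```python
# Minimal rPPG-style heart-rate sketch (illustrative only).
# Assumptions: `frames` is a sequence of RGB frames (H x W x 3 arrays) of a face,
# `fps` is the camera frame rate, and the face region is a fixed bounding box.
import numpy as np

def estimate_heart_rate(frames, fps, face_box=(100, 100, 200, 200)):
    """Estimate heart rate (bpm) from the mean green-channel signal of a face region."""
    x, y, w, h = face_box
    # 1. Spatially average the green channel inside the face region for each frame.
    signal = np.array([f[y:y + h, x:x + w, 1].mean() for f in frames])
    # 2. Detrend with a moving average to suppress slow illumination drift.
    window = int(fps)
    signal = signal - np.convolve(signal, np.ones(window) / window, mode="same")
    # 3. Pick the dominant frequency in a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 bpm).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```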
Biography: Dr. Puneet Gupta is currently working as an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Indore. His broad research interests include Computer Vision, Deep Learning, and Image Processing. He works to make current technology useful for human beings by analyzing their behavior. He has worked on fusing multiple biometric traits for authentication; analyzing facial expressions using deep learning; measuring human vitals (heart rate, breathing rate, and blood pressure) from unobtrusive, non-contact human videos; and cross-modal learning. These play an indispensable role in security, affective computing, ambient intelligence, and healthcare.
Prof. Arcot Sowmya, UNSW Sydney
Title: Learning Human Visual Attention and Gaze Communication Behaviours with Applications to Autism Spectrum Disorder Diagnosis
Abstract: Humans employ a complex combination of verbal and non-verbal communication to convey information and express intentions. While the former is extensively used during communication, the latter is just as important. In fact, non-verbal communication such as visual attention and gaze communication behaviours contains a wealth of information about a person's affective, cognitive, and mental states. A limited ability to communicate effectively hinders knowledge acquisition and is noticeable in mental health disorders such as autism spectrum disorder (ASD). There are currently no medical tests that diagnose ASD. Instead, clinicians employ a combination of a comprehensive analysis of developmental history and manual observation of atypical behaviours to diagnose a patient. It is vital to develop new diagnostic methods to ensure that early interventions are provided to optimise health outcomes. In this talk, I will outline our work on developing computational models that aim to mimic and/or quantify the visual attention and gaze communication behaviours of neurotypical groups. These models are then trained and/or fine-tuned on datasets that involve neurotypical and ASD individuals and used for ASD diagnosis and severity prediction. Other potential applications of this tool include general human behaviour understanding, digital behaviour phenotyping, and human-computer interaction.
Biography: Arcot Sowmya is Professor and Head of School, School of Computer Science and Engineering, at the University of New South Wales. She obtained her PhD in Computer Science and Engineering from the Indian Institute of Technology, Bombay. Her research utilises and extends powerful techniques drawn from machine learning and pattern recognition and applies them to the complex problems of learning models for segmentation, classification, recognition, and prediction in high-resolution images, including satellite and aerial images, medical images, multimodal datasets including omics data, and motion segmentation and classification in video. Her research is supported by competitive and industry grants and has led to technology transfer to industry multiple times. She was the winner of the 2023 Telstra Brilliant Women in Digital Health Award for Research.
Organizers #
IIT Kharagpur
Curtin University
Flinders University
Curtin University
Program Committee #
IIT Ropar
IIT Guwahati
Griffith University
Curtin University
Curtin University
IIT Ropar
IIT Ropar
Monash University
IIT Roorkee
Northeastern University
IIT Ropar
Soochow University
IIT Ropar
Registration #
Workshop registration will be handled by the ICPR 2024 main conference committee. Please follow the ICPR 2024 website for related information.
Contact #
Please contact us if you have any questions.
Email: sanjay.ghosh@ee.iitkgp.ac.in, shreya.ghosh@curtin.edu.au, abhinav.dhall@flinders.edu.au