A2I: Affective Artificial Intelligence (ICPR 2024)



Full-day workshop, 1 December 2024, Kolkata


Introduction #

The workshop on Affective Artificial Intelligence (A2I) at ICPR 2024 aims to encourage and highlight state-of-the-art research in affective computing and its applications. Key focus areas include novel neural network architectures, the incorporation of anatomical insights and constraints, new and challenging datasets, and multi-modal training.

Call for Contributions #

Full Workshop Papers #

The workshop topics include (but are not limited to):

  • Large-scale data generation or inexpensive annotation for affective computing
  • AI methods for affective computing with multimodal data
  • Multi-modal methods for emotion recognition
  • Explainable and/or privacy-preserving AI in affective computing
  • Generative and responsible personalization of affective phenomena estimators with few-shot learning
  • Bias in affective computing data (e.g., lack of multi-cultural datasets)
  • Semi-/weakly-/un-/self-supervised learning methods, domain adaptation methods, and other novel methods for affective computing
  • Applications in education, entertainment, and healthcare

We will accept submissions of full, unpublished, original papers. Papers will be peer-reviewed via a single-blind process. Accepted workshop papers will be published in Springer's Lecture Notes in Computer Science (LNCS) (https://www.springer.com/gp/computer-science/lncs). Please note that the proceedings will be published after the workshop.

Submission #

We invite authors to submit unpublished papers (15 pages including references, in ICPR format) to our workshop, to be presented in an oral/poster session upon acceptance. All submissions will go through a single-blind review process. All contributions must be submitted (along with supplementary materials, if any) via Microsoft CMT.

Note #

Authors of previously rejected main conference submissions are also welcome to submit their work to our workshop. When doing so, you must include the previous reviewers’ comments (named previous_reviews.pdf) and a letter of changes (named letter_of_changes.pdf) in your supplementary materials to clearly demonstrate how the previous reviewers’ comments have been addressed.

Important Dates #

Paper Submission Deadline: September 1, 2024 (extended from August 15, 2024), 11:55 pm Anywhere on Earth
Notification to Authors: September 15, 2024, 11:55 pm Anywhere on Earth
Camera-Ready Deadline: September 25, 2024, 11:55 pm Anywhere on Earth

Workshop Schedule #

TBD

Invited Keynote Speakers #

Prof. Dinesh Manocha
University of Maryland
Title: Multimodal and Context-Aware Emotion Perception

Abstract: Human emotion perception is integral to a wide range of intelligent-system applications, including behavior prediction, social robotics, medicine, surveillance, and entertainment. Current literature advocates that humans perceive emotions and behavior from various human modalities and from situational and background contexts. Our research focuses on this aspect of emotion perception: we attempt to build emotion perception models from multiple modalities and contextual cues, and to use such ideas of perception in various real-world domains of AI applications. This talk will cover both parts. In the first part, we will explore two approaches to improving emotion perception models. In one approach, we leverage more than one modality to perceive human emotion. In the other, we leverage contextual information: the background scene, multiple modalities of the human subject, and socio-dynamic inter-agent interactions available in the input to predict the perceived emotion. In the second part, we will explore three domains of AI applications: (i) video manipulation and deepfake detection, (ii) multimedia content analysis, and (iii) investigation of social media interactions, enriching solutions with ideas from emotion perception.
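As a concrete (and deliberately simplified) illustration of the first approach above, the sketch below shows a generic late-fusion emotion classifier in PyTorch. It is not the speaker's model: the three modality heads (face, speech, scene context), the feature dimensions, and the class name LateFusionEmotionClassifier are hypothetical, and serve only to show how features from several modalities can be combined before classification.

```python
# Illustrative late-fusion sketch for multimodal emotion recognition.
# This is a generic pattern, NOT the speaker's model: the modality heads,
# feature dimensions, and class name are hypothetical placeholders.
import torch
import torch.nn as nn

class LateFusionEmotionClassifier(nn.Module):
    def __init__(self, face_dim=512, audio_dim=128, context_dim=256, num_emotions=7):
        super().__init__()
        # One small projection head per modality (face, speech, scene context).
        self.face_head = nn.Sequential(nn.Linear(face_dim, 128), nn.ReLU())
        self.audio_head = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU())
        self.context_head = nn.Sequential(nn.Linear(context_dim, 128), nn.ReLU())
        # Concatenated (fused) representation -> emotion logits.
        self.classifier = nn.Linear(3 * 128, num_emotions)

    def forward(self, face_feat, audio_feat, context_feat):
        fused = torch.cat(
            [self.face_head(face_feat),
             self.audio_head(audio_feat),
             self.context_head(context_feat)],
            dim=-1,
        )
        return self.classifier(fused)

# Toy usage with random features for a batch of 4 video clips.
model = LateFusionEmotionClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 128), torch.randn(4, 256))
print(logits.shape)  # torch.Size([4, 7])
```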


Biography: Prof. Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, audio, physically-based modeling, and robotics. His group has developed multiple software packages that have become standards and are licensed to 60+ commercial vendors. He has published more than 790 papers and supervised 50 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, IEEE, and NAI, a member of the ACM SIGGRAPH and IEEE VR Academies, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. in November 2016.



A/Prof. Aniket Bera
Purdue University

Biography: A/Prof. Aniket Bera is an Associate Professor in the Department of Computer Science at Purdue University. He directs the interdisciplinary research lab IDEAS (Intelligent Design for Empathetic and Augmented Systems) at Purdue, working on modeling the "human" and "social" aspects using AI in robotics, graphics, and vision. He is also an Adjunct Associate Professor at the University of Maryland at College Park. Prior to this, he was a Research Assistant Professor at the University of North Carolina at Chapel Hill, where he also received his Ph.D. in 2017. He is the founder of Project Dost. He currently serves as a Senior Editor for IEEE Robotics and Automation Letters (RA-L) in the area of "Planning and Simulation" and was the Conference Chair for the ACM SIGGRAPH Conference on Motion, Interaction and Games (MIG 2022). His core research interests are in affective computing, computer graphics (AR/VR, augmented intelligence, multi-agent simulation), social robotics, autonomous agents, cognitive modelling, and planning for intelligent characters. He has advised and co-advised multiple M.S. and Ph.D. students, has authored 70+ papers with 2000+ citations, and his work has won multiple awards at top graphics/VR conferences. He also works with the University of Maryland at Baltimore Medical School to build algorithms and systems that help therapists and doctors detect mental health and social anxiety issues (AI + mental health). His research involves novel combinations of methods and collaborations in machine learning, computational psychology, computer graphics, and physically-based simulation to develop real-time computational models of human behaviour. A/Prof. Bera has previously worked in many research labs, including Disney Research, Intel, and the Centre for Development of Advanced Computing. His research has been featured on CBS, WIRED, Forbes, FastCompany, the Times of India, etc.


Dr. Jainendra Shukla
IIIT-Delhi

Biography: Dr. Jainendra Shukla, founder and director of the Human-Machine Interaction [HMI] research group at IIIT-Delhi, holds a Ph.D. with Industrial Doctorate and International Doctorate distinctions from Universitat Rovira i Virgili (URV), Spain. Earlier, he completed an M.Tech. in I.T. with a specialization in Robotics at the Indian Institute of Information Technology, Allahabad (IIIT-Allahabad), and a B.E. in I.T. with First Class and Distinction from the University of Mumbai. He is enthusiastic about empowering machines with emotional intelligence and adaptive interaction abilities that can improve the quality of life in health and social care. His research has been disseminated in several venues of international repute, including CHI, UbiComp, and IEEE Transactions on Affective Computing. He co-developed the first-ever MOOC on Affective Computing for NPTEL. He also serves as an Associate Editor on the editorial board of IEEE Transactions on Affective Computing. Currently, as Principal Investigator, he is leading the European Education and Culture Executive Agency-funded project Capacity Building in Robotics & Autonomous Systems in India [IRAS-HUB]. Previously, his research has been supported by a Startup Research Grant (SRG) from SERB, Government of India, and an Industrial Doctorate research grant from the Government of Spain. His contributions have been widely recognized, including an Honorable Mention at CHI 2024 from SIGCHI, and Distinguished Research, Research Excellence, and Teaching Excellence awards from IIIT-Delhi.



Dr. Puneet Gupta
IIT Indore

Title: Unravelling the Importance of Remote Photoplethysmography (rPPG) in Affective Computing

Abstract: In the rapidly evolving field of affective computing, the integration of Remote Photoplethysmography (rPPG) has emerged as an innovative technique for non-invasive physiological monitoring. This keynote addresses the pivotal role of rPPG in advancing our understanding and application of affective computing. rPPG enables the measurement of blood volume changes in the skin using camera-based systems, providing a contactless way to capture physiological signals such as heart rate and respiratory rate. By analyzing subtle changes in skin colour, rPPG offers a unique window into the autonomic nervous system's responses, making it an invaluable tool for emotion recognition and mental health assessment. This talk will delve into the core principles of rPPG and explore its diverse applications within affective computing. We will examine how rPPG can be used to detect facial micro-expressions, which are critical for understanding genuine emotional states and enhancing lie detection systems.
Additionally, we will discuss the application of rPPG in stress detection, providing insights into real-time monitoring of stress levels in various environments, from workplaces to daily life scenarios. Moreover, the keynote will highlight the potential of rPPG in the early detection and monitoring of depression, offering a non-intrusive method to support mental health professionals. We will also explore its use in other domains, such as anxiety detection, fatigue monitoring, and enhancing user experience in human-computer interaction. By unravelling the importance of rPPG in affective computing, this keynote aims to underscore the transformative impact of this technology on both research and practical applications, paving the way for more sophisticated, empathetic, and responsive systems.
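To make the signal-processing core of rPPG concrete, here is a minimal sketch of camera-based heart-rate estimation; it is an illustration, not code from the speaker. It assumes a pre-cropped skin/face region stacked as a NumPy array (the names roi_frames, fps, and estimate_heart_rate are hypothetical): the green channel is spatially averaged per frame, the trace is detrended and band-passed to the plausible pulse range, and the heart rate is read from the dominant spectral peak.

```python
# Minimal rPPG heart-rate sketch (illustrative only, not from the talk).
# Assumes `roi_frames` is a NumPy array of shape (T, H, W, 3) containing an
# RGB skin/face region cropped from a video recorded at `fps` frames per second.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_heart_rate(roi_frames: np.ndarray, fps: float) -> float:
    # 1. Spatially average the green channel per frame: subtle blood-volume
    #    changes modulate this trace (the classic "green" rPPG signal).
    trace = roi_frames[..., 1].reshape(len(roi_frames), -1).mean(axis=1)

    # 2. Remove slow illumination drift with a moving-average detrend.
    window = int(fps)
    trace = trace - np.convolve(trace, np.ones(window) / window, mode="same")

    # 3. Band-pass to the plausible pulse range (0.7-4 Hz, i.e. 42-240 bpm).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, trace)

    # 4. Read the heart rate off the dominant spectral peak.
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    power = np.abs(np.fft.rfft(filtered)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]  # beats per minute
```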


Biography: Dr. Puneet Gupta is currently an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology (IIT) Indore. His broad research interests include computer vision, deep learning, and image processing. He works to make current technology useful for human beings by analyzing their behavior. He has worked on fusing multiple biometric traits for authentication; analyzing facial expressions using deep learning; measuring human vitals (heart rate, breathing rate, and blood pressure) from unobtrusive, non-contact videos; and cross-modal learning. These play an indispensable role in security, affective computing, ambient intelligence, and healthcare.


Prof. Arcot Sowmya
UNSW Sydney

Biography: Prof. Arcot Sowmya is a Professor in the School of Computer Science and Engineering at the University of New South Wales. She obtained her PhD in Computer Science and Engineering from the Indian Institute of Technology, Bombay in 1992. Prof. Sowmya's research spans two areas: image analysis and recognition, and software engineering. The first area focuses on segmentation and classification of images, with techniques drawn from machine learning, pattern recognition, and statistical data analysis. A major application focus has been feature extraction, recognition, and understanding of high-resolution images, in particular satellite and aerial images and medical images. Motion segmentation and classification in video is another major application area, in particular motion tracking in AVIE at the iCinema centre, and face tracking. Past and current research projects include road network extraction from high-resolution aerial images and digital maps, symbolic learning techniques for object recognition, real-time resampling and tracking algorithms, motion tracking and recognition for interactive cinema, and medical image understanding and diagnosis on HRCT images of the lung. The second area of interest focuses on formal methods for the specification, verification, and design of real-time, reactive, and distributed systems, with techniques based on process algebras, temporal logic, simulation, and deduction. Past and current projects include verification of statecharts using logic-based techniques, real-time (robot) control software development using Esterel, design reuse techniques for component-based embedded system development, and protocol modelling and verification for on-chip communication protocols as well as web service protocols.


Organizers #

Sanjay Ghosh
IIT Kharagpur
Shreya Ghosh
Curtin University
Abhinav Dhall
Flinders University
Tom Gedeon
Curtin University

Program Committee (To be updated) #

Neeru Dubey
KTH Sweden
Parul Gupta
Monash University
Prashant Patil
IIT Guwahati
Shruti Shantiling Phutke
Griffith University
Yue Yao
Curtin University
Rakibul Hasan
Curtin University
Surbhi Madan
IIT Ropar
Hrishav Bakul Barua
Monash University
Deepak Kumar
IIT Roorkee

Registration #

Workshop registration will be handled by the ICPR-2024 main conference committee. Please follow the ICPR-2024 website for related information.

Contact #

Please contact us if you have any questions.
Email: sanjay.ghosh@ee.iitkgp.ac.in, shreya.ghosh@curtin.edu.au, abhinav.dhall@flinders.edu.au
