IJCAI 2025 Workshop on

User-Aligned Assessment of
Adaptive AI Systems


Room 516E, Palais des congrès

Montreal, Canada

August 18, 2025

Overview

AI systems are increasingly interacting with users who are not experts in AI. This has led to growing calls for better safety assessment and regulation of AI systems. However, broad questions remain about the processes and technical approaches required to conceptualize, express, manage, and enforce such regulations for adaptive AI systems, which, by their nature, are expected to exhibit different behaviors as they adapt to evolving user requirements and deployment environments.

This workshop will foster research and development of new paradigms for the assessment and design of AI systems that are not only efficient according to task-based performance measures, but also safe for diverse groups of users and compliant with the relevant regulatory frameworks. It will highlight and encourage research on new paradigms and algorithms for assessing AI systems' compliance with a variety of evolving safety and regulatory requirements, along with methods for expressing such requirements.

We also expect the workshop to foster a productive exchange of ideas across two highly active fields of research: AI and formal methods. The organizing committee includes active researchers from both fields, as does our pool of invited speakers.

Call for Papers

Although there is a growing need for independent assessment and regulation of AI systems, broad questions remain about the processes and technical approaches required to conceptualize, express, manage, assess, and enforce such regulations for adaptive AI systems.

This workshop addresses research gaps in assessing the compliance of adaptive AI systems (systems capable of planning/learning) in the presence of post-deployment changes in requirements, in user-specific objectives, in deployment environments, and in the AI systems themselves.

These research problems go beyond the classical notions of verification and validation, where operational requirements and system specifications are available a priori. In contrast, adaptive AI systems such as household robots are expected to be designed to adapt to day-to-day changes in requirements (which may be user-provided), in deployment environments, and in their own behavior as a result of system updates and learning. The workshop will feature invited talks by researchers from AI and formal methods, as well as talks on contributed papers.

Topics of interest include:


Submission Instructions

Submissions can describe either work in progress or mature work that has already been published at another research venue. We also welcome "highlights" papers summarizing results from multiple recent papers by the authors. Submissions of papers under review at other venues (NeurIPS, CoRL, ECAI, KR, etc.) are welcome, since AIA 2025 is a non-archival venue and we will not require a transfer of copyright. If such papers are currently under blind review, please anonymize the submission.

Submissions should use the IJCAI 2025 style. Papers under review at other venues can use the style file of that venue, but camera-ready versions of accepted papers must be in the IJCAI 2025 format by the camera-ready deadline. Papers should adhere to the IJCAI Code of Conduct for Authors, the IJCAI Code of Ethics, and the NeurIPS 2025 policy on using LLMs.

Three types of papers can be submitted:

Papers can be submitted via OpenReview at https://openreview.net/group?id=ijcai.org/IJCAI/2025/Workshop/AIA.

Important Dates

Announcement and call for submissions: April 09, 2025
Paper submission deadline: May 27, 2025 (11:59 PM UTC-12)
Author notification: June 13, 2025
Workshop: August 18, 2025

Invited Talks






Reid Simmons
Carnegie Mellon University, USA

Counterfactual Explanations for Better Grounding
Clark's Common Ground Theory posits that successful communication needs grounding: a convergence of mutual beliefs, obtained with the least joint effort. In practice, this means that AI agents need a model of what people do, and do not, already know in order to communicate with them effectively. In particular, for explaining the agent's policy, we have found that counterfactual explanations are critical: explanations that communicate the relevant differences between what the agent knows and what it believes the person already knows. Inferring what the person knows is modeled as an iterative process: the AI agent provides some information, the person responds in some way, and, based on the response, the agent refines its model of what it believes the person knows. This talk will present the overall framework of grounding via counterfactual explanations, along with prior and ongoing research projects that use this framework to achieve better grounding.

Bio: Dr. Reid Simmons is a Research Professor in the Robotics Institute and Computer Science Department at Carnegie Mellon University. He received his PhD in Artificial Intelligence from MIT, and since coming to CMU in 1988, his research has focused on developing self-reliant robots that can autonomously operate over extended periods of time in unknown, unstructured environments, and on human-robot social interaction, especially non-verbal communication through affect, proxemics, motion, and gesture. He is co-PI and Research Director for the NSF-sponsored AI-CARING Institute. Dr. Simmons is an author of over 250 publications on AI, Robotics, and Human-Robot Interaction and has graduated 25 PhD students. He previously served as a Program Director at the National Science Foundation, where he oversaw the National Robotics Initiative and initiated the Smart and Autonomous Systems program. In 2018, Dr. Simmons helped found the first-in-the-nation standalone undergraduate major in Artificial Intelligence and currently serves as its program director. He is a Fulbright Scholar, a Fellow of the Association for the Advancement of Artificial Intelligence, a Senior Member of IEEE, and was an ONR Summer Faculty Fellow in 2022.



Xujie Si
University of Toronto, Canada


The Science and Engineering of Autoformalizing Mathematics: A Case Study in Euclidean Geometry
Formalizing mathematics into machine-checkable logic is essential for advancing scientific rigor and enabling powerful AI reasoning. However, the process of translating informal mathematical text into formal languages remains a major bottleneck. This talk explores the challenge of autoformalization - the automated conversion of natural mathematical language into formal logic - through the lens of Euclidean geometry, one of the oldest and most foundational domains in mathematics. I will present insights from our recent work on LeanEuclid and PyEuclid, which demonstrate how modern Large Language Models (LLMs), combined with formal methods, can help bridge the gap between informal and formal mathematical reasoning.

Bio: Xujie Si is an Assistant Professor in the Department of Computer Science at the University of Toronto. He is also a faculty affiliate at the Vector Institute and an external affiliate member at Mila, the Quebec AI Institute, where he holds a Canada CIFAR AI Chair. His research centers on automated reasoning and formalization, neuro-symbolic systems, and developing foundational abstractions for reliable and explainable AI (XAI). His recent work concerns formalizing mathematics, program synthesis and verification with deep learning techniques, learning verifiably correct specifications for neural networks, and interpretable rule learning from perceptual data. His work has been recognized with an ACM SIGPLAN Distinguished Paper Award and oral/spotlight presentations at top programming languages and machine learning conferences.




Ruqi Zhang
Purdue University, USA


Aligned and Safe LLMs via Probabilistic Modeling
As large language models (LLMs) are increasingly deployed in complex and high-stakes applications, ensuring their alignment and safety is more important than ever. In this talk, I will explore how probabilistic modeling provides principled and effective approaches for addressing these challenges. First, I will introduce a framework that casts LLM alignment as a problem of probabilistic inference, and present two discrete sampling techniques for efficient inference. Then, I will show how variational inference can be used to automatically uncover diverse adversarial inputs, providing a comprehensive, distributional characterization of model vulnerabilities. Finally, I will conclude by outlining promising directions for future research.

Bio: Ruqi Zhang is an Assistant Professor in the Department of Computer Science at Purdue University. Her research focuses on machine learning, generative modeling, and probabilistic methods. Prior to joining Purdue, she was a postdoctoral researcher at the Institute for Foundations of Machine Learning (IFML) at the University of Texas at Austin. She received her Ph.D. from Cornell University. Dr. Zhang has been a key organizer of the Symposium on Advances in Approximate Bayesian Inference for four years. She has served as an Area Chair and Editor for top ML conferences and journals, including ICML, NeurIPS, AISTATS, UAI, and TMLR. Her contributions have been recognized with several honors, including AAAI New Faculty Highlights, an Amazon Research Award, a Spotlight Rising Star in Data Science award from the University of Chicago, the Seed for Success Acorn Award, and being named a Ross-Lynn Research Scholar.

Program

09:00 AM Workshop Opening
09:05 AM Invited Talk: Reid Simmons
Counterfactual Explanations for Better Grounding
09:45 AM Session Chair: Hazem Torfah
Contributed Talks
10:30 AM Coffee Break
11:00 AM Invited Talk: Xujie Si
The Science and Engineering of Autoformalizing Mathematics: A Case Study in Euclidean Geometry
11:40 AM Session Chair: Georgios Fainekos
Contributed Talks
12:30 PM Lunch Break
02:00 PM Invited Talk: Ruqi Zhang
Aligned and Safe LLMs via Probabilistic Modeling
02:40 PM Session Chair: YooJung Choi
Contributed Talk
02:50 PM Lightning Poster Introductions
03:05 PM Poster Session 1
03:30 PM Coffee Break (poster session continues during the break)
03:40 PM Poster Session 2
04:10 PM Session Chair: Pulkit Verma
Contributed Talks
05:00 PM Panel Discussion
Moderator: Pulkit Verma
Panelists: Reuth Mirsky, Sriraam Natarajan, Reid Simmons, and Xujie Si.
05:45 PM Closing


Related Events

Organizing Committee


Pulkit Verma
Massachusetts Institute of Technology, USA


YooJung Choi
Arizona State University, USA


Georgios Fainekos
Toyota Motor North America R&D, USA




Siddharth Srivastava
Arizona State University, USA


Hazem Torfah
Chalmers University of Technology, Sweden




Program Committee