AI systems are increasingly interacting with users who are not experts in AI. This has led to growing calls for better safety assessment and regulation of AI systems. However, broad questions remain about the processes and technical approaches required to conceptualize, express, manage, and enforce such regulations for adaptive AI systems, which, by nature, are expected to exhibit different behaviors as they adapt to evolving user requirements and deployment environments.
This workshop will foster research and development of new paradigms for the assessment and design of AI systems that are not only efficient according to a task-based performance measure, but also safe for diverse groups of users and compliant with the relevant regulatory frameworks. It will highlight and engender research on new paradigms and algorithms for assessing AI systems' compliance with a variety of evolving safety and regulatory requirements, along with methods for expressing such requirements.
We also expect the workshop to lead to a productive exchange of ideas across two highly active fields of research: AI and formal methods. The organizing team includes active researchers from both fields, and our pool of invited speakers features prominent researchers from both areas.
This workshop addresses research gaps in assessing the compliance of adaptive AI systems (systems capable of planning/learning) in the presence of post-deployment changes in requirements, in user-specific objectives, in deployment environments, and in the AI systems themselves.
These research problems go beyond the classical notions of verification and validation, where operational requirements and system specifications are available a priori. In contrast, adaptive AI systems such as household robots are expected to be designed to adapt to day-to-day changes in their requirements (which can be user-provided) and environments, as well as to changes arising from system updates and learning. The workshop will feature invited talks by researchers from AI and formal methods, as well as talks on contributed papers.
Topics of interest include:
Submissions can describe either work in progress or mature work that has already been published at another research venue. We also welcome "highlights" papers summarizing and highlighting results from multiple recent papers by the authors. Submissions of papers under review at other venues (NeurIPS, CoRL, ECAI, KR, etc.) are welcome, since AIA 2025 is a non-archival venue and we will not require a transfer of copyright. If such papers are currently under blind review, please anonymize the submission.
Submissions should use the IJCAI 2025 style. Papers under review at other venues can use the style file of that venue, but the camera-ready versions of accepted papers will be required in the IJCAI 2025 format by the camera-ready deadline. Papers should adhere to the IJCAI Code of Conduct for Authors, the IJCAI Code of Ethics, and the NeurIPS 2025 policy on using LLMs.
Three types of papers can be submitted:
Papers can be submitted via OpenReview at https://openreview.net/group?id=ijcai.org/IJCAI/2025/Workshop/AIA.
Announcement and call for submissions: April 09, 2025
Paper submission deadline: May 16, 2025 (11:59 PM UTC-12)
Author notification: June 06, 2025
Workshop: August 16-18, 2025 (exact date TBD)