"Be wise, generalize!"
Planning is well known to be a hard problem. We are developing methods for acquiring useful knowledge while computing plans for small problem instances. This knowledge is then used to aid planning in larger, more difficult problems.
Often, our approaches can extract algorithmic, generalized plans that efficiently solve large classes of similar problems, including problems with uncertainty in the quantities of objects the agent needs to work with. The generalized plans we compute are easier to understand and come with proofs of correctness.
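As a toy illustration of what a generalized plan looks like (the encoding and names here are hypothetical, not from our papers), consider a single looping program that empties a block tower of any height:

```python
def unstack_all(tower):
    """A generalized plan: one looping program that empties a tower of
    any height, rather than a fixed action sequence for one instance.
    `tower` is a list of block names, top of the stack last (a made-up
    encoding used only for this sketch)."""
    tower = list(tower)               # work on a copy of the instance
    plan = []
    while tower:                      # loop until the quantity is exhausted
        block = tower.pop()           # abstract action: unstack the top block
        plan.append(("unstack", block))
        plan.append(("put-on-table", block))
    return plan

# The same program solves every instance size:
print(unstack_all(["a", "b", "c"]))
```

Because the plan is a program with a loop rather than a fixed action sequence, a single argument that the loop terminates and achieves the goal covers every instance size at once.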
A real robot never has perfect sensors or actuators. Instead, an intelligent robot needs to be able to solve the tasks assigned to it while handling uncertainty about the environment as well as about the effects of its own actions. This is a challenging computational problem, but also one that humans solve on a routine basis (we don't have perfect sensors or actuators either!).
We are developing new methods for efficiently expressing and solving problems where the agent has limited, incomplete information about the quantities and identities of the objects that it may encounter.
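A minimal sketch of the underlying idea, with a made-up sensing model: the agent maintains a probability distribution over the possible number of objects and updates it as observations arrive:

```python
from fractions import Fraction

# Hypothetical example: the agent is unsure how many boxes are in a room.
# Belief = probability distribution over the possible counts.
belief = {n: Fraction(1, 4) for n in range(1, 5)}   # uniform over 1..4 boxes

def observe_at_least(k, belief):
    """Bayesian update after sensing that at least k boxes are present:
    discard inconsistent counts and renormalize the remaining mass."""
    consistent = {n: p for n, p in belief.items() if n >= k}
    total = sum(consistent.values())
    return {n: p / total for n, p in consistent.items()}

belief = observe_at_least(3, belief)
print(belief)   # probability mass shifts onto counts 3 and 4
```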
The objective of this project is to introduce AI planning concepts using mobile manipulator robots. It uses a visual programming interface to make these concepts easier to grasp. Users can get the robot to accomplish desired tasks by assembling puzzle-shaped blocks that encode the robot's possible actions, carrying out navigation, planning, and manipulation by connecting blocks instead of writing code. When a user's plan for a particular goal fails, AI explanation techniques tell them why, helping them better grasp the fundamentals of AI planning.
How would a non-expert assess what their AI system can or can’t do safely? Today’s AI systems require experts to evaluate them, which limits the deployability and safe usability of AI systems.
We are developing approaches for autonomous, user-driven assessment of the capabilities of black-box taskable AI systems, even as the AI systems learn and adapt. These methods would enable users to continually evaluate and understand their AI systems in their own idiosyncratic deployments. They would prevent performance failures and accidents that can arise when AI systems are used beyond their dynamic envelopes of safe applicability. We are also designing approaches for computing user-aligned explanations of AI behavior. Together, these approaches improve the safety and usability of AI systems and enable autonomous, on-the-fly training paradigms for AI systems.
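The flavor of such an assessment can be sketched as follows. Everything here is illustrative: a real black-box system exposes no "difficulty" knob, and the probe-generation strategy would be far more structured, but the loop of querying the system and summarizing where it succeeds is the core idea:

```python
def black_box_agent(task_difficulty):
    """Stand-in for an opaque AI system: succeeds reliably only on
    sufficiently easy tasks. (Purely illustrative.)"""
    return task_difficulty <= 0.6

def assess_capability(agent, probes):
    """User-driven assessment: query the agent on generated probe tasks
    and summarize where it succeeds, without inspecting its internals."""
    results = {d: agent(d) for d in probes}
    solved = [d for d, ok in results.items() if ok]
    return max(solved) if solved else None  # crude estimate of the safe envelope

probes = [round(0.1 * i, 1) for i in range(1, 10)]
print(assess_capability(black_box_agent, probes))  # -> 0.6
```

Because the assessment treats the system as a black box, the same loop can be rerun whenever the system learns or adapts, keeping the user's picture of its safe envelope current.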
In order to solve complex, long-horizon tasks such as doing the laundry, a robot needs to compute high-level strategies (e.g., would it be useful to put all the dirty clothes in a basket first?) as well as the joint movements that it should execute. Unfortunately, approaches for high-level planning rely on task-planning abstractions that are lossy and can produce “solutions” that have no feasible executions.
We are developing new methods for computing safe task-planning abstractions and for dynamically refining them to produce combined task and motion plans that are guaranteed to be executable. We are also studying abstractions in sequential decision making (SDM): evaluating the effect of abstractions on SDM models, and searching for abstractions that help solve a given SDM problem.
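One common way to organize this kind of dynamic refinement, sketched here with hypothetical `task_planner` and `motion_planner` callables (not our actual implementation), is a loop that replans at the task level whenever an abstract action turns out to have no feasible motion:

```python
def plan_task_and_motion(task_planner, motion_planner, problem):
    """Sketch of abstraction refinement for task and motion planning:
    compute a high-level task plan, try to refine each abstract action
    into a motion plan, and replan with the discovered failure as a
    constraint when refinement fails. All callables are hypothetical."""
    constraints = []
    while True:
        task_plan = task_planner(problem, constraints)
        if task_plan is None:
            return None                      # no abstract plan left to try
        trajectories = []
        for action in task_plan:
            traj = motion_planner(action)
            if traj is None:                 # lossy abstraction: infeasible step
                constraints.append(action)   # forbid this refinement and replan
                break
            trajectories.append(traj)
        else:
            return trajectories              # every step is executable
```

The loop makes the safety property explicit: a plan is only returned once every abstract step has been refined into an executable trajectory, so the lossiness of the task-level abstraction can never surface as a failure at execution time.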
Learning abstractions for RL in non-image-based tasks is hard. We were surprised to find that learning a conditional abstraction tree (CAT) while doing RL can improve sample efficiency to the point where vanilla Q-learning can outperform SOTA RL methods. #UAI2023 — Siddharth Srivastava (@sidsrivast) June 15, 2023
Happy to share a new class of algorithms for generalized planning: https://t.co/wOLOj2F8sH The paper addresses a key problem in computing general plans/policies: will a given generalized plan terminate/reach good states? This is unsolvable in general due to the halting problem. pic.twitter.com/aUXn2MfBNi — Siddharth Srivastava (@sidsrivast) December 8, 2022
Our #NeurIPS2022 paper develops a new approach for few-shot learning of generalized policy automata (GPA) for relational stochastic shortest path planning problems. The learned GPAs can be used to transfer learning and accelerate SSP solvers on much larger problem instances! pic.twitter.com/dRp5sZNeps — Siddharth Srivastava (@sidsrivast) November 23, 2022
We’d like AI systems to continually learn and adapt, but how would a user figure out what their black-box AI (BBAI) system can safely do at any point? This is difficult especially when the user and the BBAI use different representations. Our #KR2022 work addresses this problem. pic.twitter.com/0hzhk7QNZ0 — Siddharth Srivastava (@sidsrivast) July 31, 2022
Reliable planning and learning in problems without image-based state representations remains challenging. In our #IJCAI2022 work we found that doing an abstraction before learning results in generalized Q functions and zero-shot transfer to much larger problems! w/ @KariaRushang pic.twitter.com/ksFt7FWksv — Siddharth Srivastava (@sidsrivast) July 20, 2022
Consider submitting your recent work on generalization/transfer in all forms of planning and sequential decision making! Due ~today, May 20th in NeurIPS or IJCAI format https://t.co/1DHcFEcFIt— Siddharth Srivastava (@sidsrivast) May 20, 2022
Congratulations to @shah_naman, @pulkit_verma, Trevor Angle and the entire dev team for creating JEDAI and winning the Best Demo Award @aamas2022! 🎉— Siddharth Srivastava (@sidsrivast) May 14, 2022
JEDAI (JEDAI explains decision-making AI) is an interactive learning tool for AI+robotics that provides explanations autonomously.
Task and motion planning (TAMP) captures the essence of what we want robots to be able to do in a range of settings. But cobots need to communicate with humans to avoid potential conflicts. Our #ICRA2022 work develops a unified framework for integrating communication with TAMP. pic.twitter.com/ObVDkWSZtZ— Siddharth Srivastava (@sidsrivast) April 21, 2022
Consider submitting a piece about your work on representations for generalization and transfer in all forms of AI planning/sdm! Links below. https://t.co/ogoNFMzvQo— Siddharth Srivastava (@sidsrivast) April 19, 2022
We use hand-coded state and action abstractions extensively in AI planning. Where do these abstractions come from? Our #aamas2022 paper develops methods for learning such abstractions from scratch and for using them to solve robot planning problems efficiently and reliably.🗝️🥡👇 pic.twitter.com/lp3U8rpK6R— Siddharth Srivastava (@sidsrivast) March 5, 2022
Can we assess what Black-Box #AI systems can and can’t do reliably as they change/adapt to changing situations? In our #AAAI2022 paper, @pulkit_verma and @rashmeet_nayyar (equal contributors) address this with the foundations for efficient /differential/ assessment of AI systems. pic.twitter.com/5xmOmSS8XL — Siddharth Srivastava (@sidsrivast) December 3, 2021
Excited to be a part of the workshop on Generalization in Planning (GenPlan) at IJCAI '21! We welcome current/recent work on the synthesis or learning of plans and policies with an emphasis on generalizability and transfer. Submission deadline: May 9. https://t.co/bfwxUGXXqm — Siddharth Srivastava (@sidsrivast) April 19, 2021
Congratulations to @pulkit_verma and Rushang Karia for their first first-authored papers! Pulkit’s work investigates how users may assess the limits and capabilities of their AI systems while Rushang’s develops self-training algorithms for speeding up AI planning. #aaai2021 (1/2)— Siddharth Srivastava (@sidsrivast) December 16, 2020
Imagine a robot doing your laundry or making a cup of tea for you. #ASUEngineering @CIDSEASU assistant professor @sidsrivast is working to equip #artificialintelligence with the capability to do real-world tasks. #AI https://t.co/RrcfGUP0wr— ASU Ira A. Fulton Schools of Engineering (@ASUEngineering) August 11, 2020
Can we use #DeepLearning to speed up robot planning while maintaining theoretical guarantees (correctness, probabilistic completeness)? Dan and Kislay's work indicates YES, but with a few changes to planning algorithms. Check it out at https://t.co/tu7TSRcwBT and @icra20 online! pic.twitter.com/fQQ13qyzwJ— Siddharth Srivastava (@sidsrivast) May 31, 2020
Congratulations to Rashmeet for winning the Chambliss Medal for her research project on using #AI for reliable inference about intergalactic space! @asunow article featuring this interdisciplinary #ASUEngineering work: https://t.co/bkMLSxT5Wo— Siddharth Srivastava (@sidsrivast) May 27, 2020
Our new “anytime” algorithm for stochastic task and motion planning computes better robot policies as it gets more time. Our YuMi uses this to autonomously build Keva structures! @icra2020 Personal favorite: https://t.co/BPti3sJp1E Paper + videos: https://t.co/lsDf7So4gl pic.twitter.com/zWLCZlL71h — Siddharth Srivastava (@sidsrivast) March 20, 2020
Congrats to Daniel, Naman, Kislay, Deepak & Pranav for their first papers, accepted at #ICRA2020! Their #NSF_CISE funded #AI research shows how to compute reliable robot plans more efficiently using (1)abstractions & (2)#DeepLearning. Camera-readies coming soon! #ICRA #NSFfunded— Siddharth Srivastava (@sidsrivast) January 23, 2020
How can we train people to use adaptive #AI systems, whose behavior and functionality is expected to change from day to day? Our approach: make the AI system self-explaining! #ASUFoW https://t.co/VwFs30sJU2— Siddharth Srivastava (@sidsrivast) September 14, 2019
As anyone who has talked to a 3yo knows, explaining why something can’t be done can be much harder than explaining a solution. Can #AI systems explain why they failed to solve a given problem? Sarath's new work takes first steps in explaining unsolvability https://t.co/lQuRSeKc4b— Siddharth Srivastava (@sidsrivast) March 27, 2019
Dan and Kislay's new work on motion planning tries to get the best of both worlds: the learn and link planner (LLP) learns and improves with experience. It is also sound and probabilistically complete. Check out the results here: https://t.co/v21OfXJgtB— Siddharth Srivastava (@sidsrivast) March 11, 2019
Our new work aims at allowing designers and users to choose whether their AI systems clarify or protect information. A Unified Framework for Planning in Adversarial and Cooperative Environments. Kulkarni et al. https://t.co/7ibzWE7KwC #AAAI2019 https://t.co/BQZW0umryL — AAIR Lab (@AAIRLabASU) February 5, 2019
Alfred says Hello!! https://t.co/08H96AoUMq— AAIR Lab (@AAIRLabASU) January 29, 2019
Associate Professor and Director, AAIR Lab
| Name | Degree | Graduated | Current Position |
|---|---|---|---|
| Kyle Joseph Atkinson | B.S. | Jul 2022 | Graduate Student, Arizona State University |
| Kiran Prasad | M.S. | Jul 2022 | Software Developer, Amazon Robotics |
| Shashank Rao Marpally | M.S. | May 2021 | PhD Student, National University of Singapore |
| Deepak Kala Vasudevan | M.S. | Dec 2020 | Software Developer, Amazon Robotics |
| Abhyudaya Srinet | M.S. | Aug 2020 | CTO, MentR-Me and MiM-Essay.com |
| Kislay Kumar | M.S. | Dec 2019 | Software Developer, Amazon Robotics AI |
| Chirav Dave | M.S. | Dec 2019 | Technical Lead, A10 Networks, Inc. |
| Daniel Molina | M.S. | May 2019 | Machine Learning Engineer, State Farm |
| Midhun P. M. | M.C.S. | Dec 2018 | Staff Software Engineer, Bazaarvoice |
| Julia Nakhleh | B.S. | May 2019 | Graduate Student, University of Wisconsin-Madison |
| Ryan Christensen | B.S. | May 2018 | Data Engineer, Tellic |
| Perry Wang | B.S. | May 2018 | |