From the Real World to Logic and Back:
Learning Symbolic World Models for
Long-Horizon Planning

¹Arizona State University
²Brown University
Conference on Robot Learning (CoRL) 2025

*Indicates Equal Contribution

Trained on just a handful of simple pick-and-place demos, our robots learned their own symbolic world models and then generalized zero-shot to tasks 10-18x larger than anything seen in training. From packing boxes to building Keva towers to setting real dinner tables, the same learned abstractions let robots handle far more objects, longer horizons, and unseen environments, all without extra supervision.

Abstract

Robots still lag behind humans in their ability to generalize from limited experience, particularly when transferring learned behaviors to long-horizon tasks in unseen environments. We present the first method that enables robots to autonomously invent symbolic, relational concepts directly from a small number of raw, unsegmented, and unannotated demonstrations. From these, the robot learns logic-based world models that support zero-shot generalization to tasks of far greater complexity than those in training. Our framework achieves performance on par with hand-engineered symbolic models, while scaling to execution horizons far beyond training and handling up to 18 times more objects than seen during learning. The results demonstrate a framework for autonomously acquiring transferable symbolic abstractions from raw robot experience, contributing toward the development of interpretable, scalable, and generalizable robot planning systems.
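To make the idea of a logic-based, relational world model concrete, here is a minimal illustrative sketch in Python of a STRIPS-style symbolic model and a brute-force planner over it. The `place` operator, its predicates, and the breadth-first planner are hypothetical examples of the kind of abstraction such a model contains; they are assumptions for illustration, not the concepts, operators, or planning algorithm actually learned or used in this work.

```python
# Illustrative sketch only: a tiny STRIPS-style relational world model and a
# breadth-first planner. The "place" operator and its predicates are
# hypothetical; they are not the symbolic concepts learned by the paper.
from collections import deque
from dataclasses import dataclass
from itertools import permutations


@dataclass(frozen=True)
class Operator:
    name: str
    params: tuple          # parameter names, e.g. ("?obj", "?loc")
    preconds: frozenset    # relational literals over the parameters
    add: frozenset         # literals made true by the action
    delete: frozenset      # literals made false by the action

    def ground(self, binding):
        """Substitute a {parameter: object} binding into every literal."""
        def sub(literals):
            return frozenset(
                (pred, *(binding.get(a, a) for a in args))
                for pred, *args in literals
            )
        return sub(self.preconds), sub(self.add), sub(self.delete)


# Hypothetical operator: place a held object onto a clear location.
PLACE = Operator(
    name="place",
    params=("?obj", "?loc"),
    preconds=frozenset({("holding", "?obj"), ("clear", "?loc")}),
    add=frozenset({("on", "?obj", "?loc"), ("handempty",)}),
    delete=frozenset({("holding", "?obj"), ("clear", "?loc")}),
)


def plan(init, goal, operators, objects, max_depth=6):
    """Breadth-first search over ground applications of the operators."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        if len(steps) >= max_depth:
            continue
        for op in operators:
            for combo in permutations(objects, len(op.params)):
                pre, add, dele = op.ground(dict(zip(op.params, combo)))
                if pre <= state:
                    nxt = (state - dele) | add
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [(op.name, combo)]))
    return None


if __name__ == "__main__":
    init = {("holding", "block1"), ("clear", "table")}
    goal = {("on", "block1", "table")}
    print(plan(init, goal, [PLACE], ["block1", "table"]))
    # -> [('place', ('block1', 'table'))]
```

Because the model is relational, the same operator applies to any number of objects; planning cost, not the model itself, is what grows with task size, which is what makes this style of abstraction attractive for zero-shot scaling to larger instances.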

BibTeX

@inproceedings{shah2025reals2logic,
  title={From the Real World to Logic and Back: Learning Symbolic World Models for Long-Horizon Planning},
  author={Shah, Naman and Nagpal, Jayesh and Srivastava, Siddharth},
  booktitle={Proceedings of the Conference on Robot Learning (CoRL)},
  year={2025},
  url={https://aair-lab.github.io/r2l-lamp}
}