taesiri commited on
Commit
0da3527
1 Parent(s): 8475652

Upload abstract/2306.17840.txt with huggingface_hub

Files changed (1)
  1. abstract/2306.17840.txt +1 -0
abstract/2306.17840.txt ADDED
@@ -0,0 +1 @@
+ Large language models (LLMs) provide a promising tool that enables robots to perform complex reasoning tasks. However, the limited context window of contemporary LLMs makes reasoning over long time horizons difficult. Embodied tasks, such as those a household robot might be expected to perform, typically require that the planner consider information acquired long ago (e.g., properties of the many objects the robot previously encountered in the environment). Attempts to capture the world state in an LLM's implicit internal representation are complicated by the paucity of task- and environment-relevant information available in a robot's action history, while methods that convey information to the LLM via the prompt are subject to its limited context window. In this paper, we propose Statler, a framework that endows LLMs with an explicit representation of the world state as a form of "memory" that is maintained over time. Integral to Statler is its use of two instances of general LLMs - a world-model reader and a world-model writer - that interface with and maintain the world state. By providing access to this world-state "memory", Statler improves the ability of existing LLMs to reason over longer time horizons without the constraint of context length. We evaluate the effectiveness of our approach on three simulated table-top manipulation domains and a real robot domain, and show that it improves the state of the art in LLM-based robot reasoning. Project website: "project's website".
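
For intuition, a minimal sketch of the reader/writer loop the abstract describes might look like the following Python pseudocode. Here query_llm, the function names, and the prompt formats are illustrative placeholders assumed for this sketch, not the authors' implementation.

# Minimal, illustrative sketch of the Statler-style reader/writer loop.
# query_llm is a hypothetical stand-in for any LLM completion call;
# the prompt formats below are assumptions, not the paper's actual prompts.

def query_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM completion endpoint."""
    raise NotImplementedError

def world_model_reader(world_state: str, instruction: str) -> str:
    """Propose the next robot action, conditioned on the explicit world state."""
    prompt = (
        "World state:\n" + world_state + "\n\n"
        "Instruction: " + instruction + "\n"
        "Next action:"
    )
    return query_llm(prompt)

def world_model_writer(world_state: str, action: str) -> str:
    """Rewrite the explicit world state to reflect the executed action."""
    prompt = (
        "World state:\n" + world_state + "\n\n"
        "Executed action: " + action + "\n"
        "Updated world state:"
    )
    return query_llm(prompt)

def run_episode(initial_state: str, instructions: list[str]) -> str:
    """Alternate reading (acting) and writing (state maintenance).

    Each call conditions on the compact world state rather than the full
    action history, so prompt length stays bounded over long horizons.
    """
    state = initial_state
    for instruction in instructions:
        action = world_model_reader(state, instruction)
        state = world_model_writer(state, action)
    return state

Under these assumptions, the design choice the abstract emphasizes is visible in run_episode: the growing action history is never fed back to the model, only the maintained world-state "memory", which is what sidesteps the context-window constraint.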