taesiri committed on
Commit
8e375f4
1 Parent(s): 3f9798d

Upload abstract/2201.07207.txt with huggingface_hub

Files changed (1)
  1. abstract/2201.07207.txt +1 -0
abstract/2201.07207.txt ADDED
@@ -0,0 +1 @@
+ Can world knowledge learned by large language models be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (e.g., "make breakfast"), to a chosen set of actionable steps (e.g., "open fridge"). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained language models are large enough and prompted appropriately, they can effectively decompose high-level tasks into mid-level plans without any further training. However, the plans produced naively by language models often cannot be mapped precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the language model baseline. A human evaluation reveals a trade-off between executability and correctness, but shows a promising sign towards extracting actionable knowledge from language models. Project website.
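
The semantic translation step described in the abstract can be sketched as a nearest-neighbor lookup: each free-form step generated by the language model is mapped to the most similar action in the environment's admissible set. The sketch below is a simplified illustration, not the paper's implementation — it substitutes a bag-of-words cosine similarity for the pretrained sentence-embedding similarity the actual method would use, and the action list is hypothetical.

```python
import math
from collections import Counter

def cosine_sim(a: str, b: str) -> float:
    """Bag-of-words cosine similarity (toy stand-in for embedding similarity)."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def translate_to_admissible(generated_step: str, admissible_actions: list[str]) -> str:
    """Map a free-form model-generated plan step to the closest admissible action."""
    return max(admissible_actions, key=lambda action: cosine_sim(generated_step, action))

# Hypothetical admissible-action set for a VirtualHome-style environment.
admissible = ["walk to kitchen", "open fridge", "grab milk", "switch on stove"]
print(translate_to_admissible("open the fridge door", admissible))  # -> open fridge
```

In the full method, translated steps are also appended back into the prompt so that subsequent steps are conditioned on admissible actions rather than on the model's raw phrasing.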