RAT: Retrieval Augmented Thoughts Elicit Context-Aware Reasoning in Long-Horizon Generation
Abstract
We explore how iteratively revising a chain of thoughts with the help of information retrieval significantly improves large language models' reasoning and generation ability in long-horizon generation tasks, while substantially mitigating hallucination. In particular, the proposed method -- *retrieval-augmented thoughts* (RAT) -- revises each thought step one by one, using information retrieved with the task query together with the current and past thought steps, after the initial zero-shot CoT is generated. Applying RAT to GPT-3.5, GPT-4, and CodeLLaMA-7b substantially improves their performance on various long-horizon generation tasks: on average, rating scores increase relatively by 13.63% on code generation, 16.96% on mathematical reasoning, 19.2% on creative writing, and 42.78% on embodied task planning. The demo page can be found at https://craftjarvis.github.io/RAT
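The overall procedure described above can be summarized as a simple loop: draft a zero-shot CoT, then revise each step in order against retrieved evidence. The sketch below illustrates this under assumed placeholder helpers (`llm` and `retrieve` are hypothetical stand-ins, not the paper's actual prompts or retriever); it is a minimal illustration, not the authors' implementation.

```python
def llm(prompt: str) -> str:
    """Hypothetical call to an LLM such as GPT-3.5, GPT-4, or CodeLLaMA-7b."""
    raise NotImplementedError

def retrieve(query: str) -> str:
    """Hypothetical information-retrieval call (e.g., web or corpus search)."""
    raise NotImplementedError

def rat(task_query: str) -> str:
    # 1. Generate an initial zero-shot chain of thoughts (CoT).
    draft = llm(f"Answer step by step:\n{task_query}")
    thoughts = [t for t in draft.split("\n") if t.strip()]

    # 2. Revise each thought step one by one, retrieving with the task query
    #    plus the current and previously revised thought steps.
    revised = []
    for step in thoughts:
        evidence = retrieve("\n".join([task_query, *revised, step]))
        new_step = llm(
            f"Task: {task_query}\n"
            f"Previous steps:\n" + "\n".join(revised) + "\n"
            f"Current step: {step}\n"
            f"Retrieved information:\n{evidence}\n"
            "Revise the current step so it is consistent with the retrieved information."
        )
        revised.append(new_step)

    # 3. Produce the final answer from the revised chain of thoughts.
    return llm(f"Task: {task_query}\nReasoning:\n" + "\n".join(revised) + "\nFinal answer:")
```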
Community
This paper was selected and reviewed as a spotlight paper at Harmonious for the week of March 11, 2024:
https://www.harmonious.ai/t/weekly-paper-roundup-rat-retrieval-augmented-thoughts-3-11-2024/43
Authors: please comment/correct as appropriate.