arxiv:2307.15337

Skeleton-of-Thought: Large Language Models Can Do Parallel Decoding

Published on Jul 28, 2023
· Featured in Daily Papers on Jul 31, 2023
Abstract

This work aims at decreasing the end-to-end generation latency of large language models (LLMs). One of the major causes of the high generation latency is the sequential decoding approach adopted by almost all state-of-the-art LLMs. In this work, motivated by the thinking and writing process of humans, we propose "Skeleton-of-Thought" (SoT), which guides LLMs to first generate the skeleton of the answer, and then conducts parallel API calls or batched decoding to complete the contents of each skeleton point in parallel. Not only does SoT provide considerable speed-up (up to 2.39x across 11 different LLMs), but it can also potentially improve the answer quality on several question categories in terms of diversity and relevance. SoT is an initial attempt at data-centric optimization for efficiency, and reveals the potential of pushing LLMs to think more like a human for answer quality.
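For readers who want to prototype the idea, below is a minimal Python sketch of the two-stage pipeline the abstract describes, in its parallel-API-call variant. The `call_llm` helper is a hypothetical placeholder for whatever completion client you use, and the prompt wording is an illustrative approximation rather than the paper's exact templates.

```python
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.
    Swap in your provider's client (or a local model server)."""
    raise NotImplementedError

def skeleton_of_thought(question: str, max_workers: int = 8) -> str:
    # Stage 1: ask the model for only the skeleton of the answer --
    # a numbered list of short points, not the full content.
    skeleton = call_llm(
        "Provide only the skeleton (not the full content) for answering the "
        "question, as a numbered list of 3-10 points of 3-5 words each.\n"
        f"Question: {question}\nSkeleton:"
    )
    points = [line.strip() for line in skeleton.splitlines() if line.strip()]

    # Stage 2: expand each point independently. Because the expansions do not
    # depend on one another, they can be issued as parallel API calls
    # (or batched together in one forward pass for a local model).
    def expand(point: str) -> str:
        return call_llm(
            f"Question: {question}\nSkeleton:\n{skeleton}\n"
            f"Continue and only continue the writing of point \"{point}\" "
            "in 1-2 sentences."
        )

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        expansions = pool.map(expand, points)  # preserves skeleton order

    # Concatenate the expanded points into the final answer.
    return "\n".join(expansions)
```

Under this scheme, wall-clock latency is roughly the skeleton stage plus the single longest point expansion, rather than the cost of decoding the whole answer token by token, which is where speed-ups like the reported up-to-2.39x come from.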

Community

I could see this becoming a standard way that top-performing models improve latency. Mixture of Experts is an architecture that already injects more and more "application architecture" into the neural network, so there's no reason this method couldn't be layered on top when that approach is already being used.


