arXiv:2308.03421

RecycleGPT: An Autoregressive Language Model with Recyclable Module

Published on Aug 7, 2023 · Featured in Daily Papers on Aug 8, 2023
Abstract

Existing large language models have to run K times to generate a sequence of K tokens. In this paper, we present RecycleGPT, a generative language model that achieves fast decoding by recycling pre-generated model states instead of running the whole model at every step. Our approach relies on the observation that adjacent tokens in a sequence usually have strong correlations, so the next token can often be reasonably inferred from the preceding ones. Through theoretical evaluations and practical tests on downstream text generation tasks, we demonstrate the effectiveness of our approach in lowering inference latency, achieving up to 1.4x speedup while preserving high performance.
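To make the recycling idea concrete, here is a minimal toy sketch of alternating full forward passes with a cheap "recyclable module" that guesses the next state from the cached one. This is an illustration of the general technique the abstract describes, not the authors' implementation; all names (full_step, recycle_step, the weight matrices) and the strict even/odd alternating schedule are assumptions.

```python
# Toy sketch of recycling decoding: a small recyclable module reuses the
# cached hidden state on alternate steps, so ~K/2 full passes emit K tokens.
# All weights are random stand-ins; this only illustrates the control flow.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 100, 32

W_full = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1     # "whole model" (toy)
W_recycle = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1  # lightweight recyclable module
W_out = rng.standard_normal((HIDDEN, VOCAB)) * 0.1       # shared output head
embed = rng.standard_normal((VOCAB, HIDDEN)) * 0.1       # token embeddings

def full_step(h, token):
    """Expensive step: run the whole model to refresh the hidden state."""
    return np.tanh((h + embed[token]) @ W_full)

def recycle_step(h):
    """Cheap step: recycle the cached state to predict the next token's state."""
    return np.tanh(h @ W_recycle)

def decode(prompt_token, n_tokens):
    h = embed[prompt_token]
    out = []
    for t in range(n_tokens):
        if t % 2 == 0:
            # Even steps pay for a full forward pass.
            h = full_step(h, out[-1] if out else prompt_token)
        else:
            # Odd steps skip the full model and recycle the state.
            h = recycle_step(h)
        out.append(int(np.argmax(h @ W_out)))
    return out

print(decode(prompt_token=1, n_tokens=8))
```

Under this schedule, half of the decoding steps avoid the full forward pass entirely, which is the mechanism behind the latency reduction the abstract reports; the exact speedup depends on how cheap the recyclable module is relative to the full model.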
