arxiv:2406.10209

Be like a Goldfish, Don't Memorize! Mitigating Memorization in Generative LLMs

Published on Jun 14
· Submitted by ahans1 on Jun 17
Abstract

Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss. During training, a randomly sampled subset of tokens is excluded from the loss computation. These dropped tokens are not memorized by the model, which prevents verbatim reproduction of a complete chain of tokens from the training set. We run extensive experiments training billion-scale Llama-2 models, both pre-trained and trained from scratch, and demonstrate significant reductions in extractable memorization with little to no impact on downstream benchmarks.

Community

Paper author · Paper submitter

Large language models can memorize and repeat their training data, causing privacy and copyright risks. To mitigate memorization, we introduce a subtle modification to the next-token training objective that we call the goldfish loss.

  1. Do next-token prediction
  2. Drop pseudorandom tokens from your loss computation (see the sketch after this list)
  3. ????
  4. Profit, i.e., mitigated training data regurgitation
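
A minimal sketch of the idea in PyTorch (not the authors' reference implementation; the drop rule here is assumed to be a simple seeded pseudorandom ~1-in-k mask, standing in for the paper's token-dropping strategy):

```python
import torch
import torch.nn.functional as F


def goldfish_loss(logits: torch.Tensor, labels: torch.Tensor, k: int = 4, seed: int = 0) -> torch.Tensor:
    """Next-token cross-entropy where a pseudorandom ~1/k of target tokens is excluded.

    logits: (batch, seq_len, vocab_size) model outputs
    labels: (batch, seq_len) target token ids
    """
    # Shift so that position t predicts token t+1 (standard next-token objective).
    logits = logits[:, :-1, :].contiguous()
    labels = labels[:, 1:].contiguous()

    # Pseudorandom mask: True = keep a token in the loss, False = drop it.
    # Seeding the generator keeps the drop pattern fixed across epochs, so the
    # same tokens are consistently excluded (an assumption of this sketch).
    gen = torch.Generator(device="cpu").manual_seed(seed)
    keep = torch.rand(labels.shape, generator=gen) >= (1.0 / k)
    keep = keep.to(labels.device)

    # Dropped positions get ignore_index so they contribute no loss or gradient.
    masked_labels = labels.masked_fill(~keep, -100)
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        masked_labels.view(-1),
        ignore_index=-100,
    )
```

Because the mask here is seeded rather than resampled every step, the same target positions stay excluded throughout training; the paper additionally explores masks keyed on local context so that duplicated passages receive the same drop pattern, which this sketch does not implement.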

