arxiv:2406.11813

How Do Large Language Models Acquire Factual Knowledge During Pretraining?

Published on Jun 17 · Submitted by philschmid on Jun 18

Abstract

Despite the recent observation that large language models (LLMs) can store substantial factual knowledge, there is limited understanding of how they acquire that knowledge through pretraining. This work addresses the gap by studying the dynamics of factual knowledge acquisition during pretraining, and the findings reveal several important insights. First, counterintuitively, we observe that pretraining on more data shows no significant improvement in the model's capability to acquire and maintain factual knowledge. Second, there is a power-law relationship between training steps and the forgetting of both memorization and generalization of factual knowledge, and LLMs trained with duplicated training data exhibit faster forgetting. Third, training LLMs with larger batch sizes can enhance the models' robustness to forgetting. Overall, our observations suggest that factual knowledge acquisition in LLM pretraining occurs by progressively increasing the probability of the factual knowledge presented in the pretraining data at each step, with this increase then diluted by subsequent forgetting. This interpretation provides plausible explanations for recently observed behaviors of LLMs, such as their poor performance on long-tail knowledge and the benefits of deduplicating the pretraining corpus.
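
The power-law forgetting claim can be made concrete with a small fitting exercise. The sketch below is not from the paper: it fits a retention curve r(t) = a · t^(-b) in log-log space, a standard way to estimate a power-law exponent, and the step counts and retention values are hypothetical placeholders chosen only for illustration.

```python
# A minimal sketch (not the paper's code): estimate a power-law forgetting
# exponent from hypothetical retention measurements taken at several
# training steps after a fact was last seen. All numbers are placeholders.
import numpy as np

# Steps since the fact was last encountered, and the (hypothetical)
# fraction of the acquired probability gain that is still retained.
steps = np.array([10, 100, 1_000, 10_000, 100_000], dtype=float)
retained = np.array([0.80, 0.55, 0.38, 0.26, 0.18])

# A power law r(t) = a * t^(-b) is linear in log-log space:
# log r = log a - b * log t, so ordinary least squares recovers b.
slope, intercept = np.polyfit(np.log(steps), np.log(retained), deg=1)
a, b = np.exp(intercept), -slope

print(f"fitted power law: r(t) ~= {a:.2f} * t^(-{b:.2f})")
```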

Community

Paper submitter
  • LLMs learn facts by encountering them multiple times during training, across different sources (a probe sketch follows this list).
  • LLMs forget faster when trained on exact repetitions of the same data; deduplicating the training data helps them retain knowledge.
  • Adding more training data does not significantly improve how well LLMs acquire facts.
  • Training with larger batch sizes makes LLMs more robust to forgetting.
  • Experiments on 1 billion (1B) and 7 billion (7B) parameter models show that the larger model memorizes and generalizes facts better.
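
One way to observe the acquisition-then-forgetting dynamic these bullets describe is to track the probability a model assigns to a fact's completion across intermediate pretraining checkpoints. The sketch below only illustrates that kind of probe and is not the paper's evaluation code; the checkpoint names and the probe sentence are hypothetical placeholders.

```python
# A minimal sketch (assumptions, not the paper's code): compare the
# log-probability a model assigns to a fact's target span across
# pretraining checkpoints. Checkpoint IDs and the probe are hypothetical.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def target_log_prob(model, tokenizer, prompt: str, target: str) -> float:
    """Sum of log-probabilities the model assigns to `target` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    target_ids = tokenizer(target, add_special_tokens=False,
                           return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Logits at position i predict token i+1; slice the span covering the target.
    log_probs = torch.log_softmax(logits[0, prompt_ids.shape[1] - 1 : -1], dim=-1)
    return log_probs.gather(1, target_ids[0].unsqueeze(1)).sum().item()

# Hypothetical checkpoint identifiers saved at different training steps.
checkpoints = ["my-org/pythia-like-1b-step10000", "my-org/pythia-like-1b-step50000"]
prompt, target = "The capital of Australia is", " Canberra"

for ckpt in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForCausalLM.from_pretrained(ckpt)
    print(ckpt, target_log_prob(model, tokenizer, prompt, target))
```

Plotting this quantity over training steps would show the stepwise probability increases at each encounter and the gradual decay between encounters that the paper's interpretation describes.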
