arxiv:2012.13255

Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning

Published on Dec 22, 2020

Abstract

Although pretrained language models can be fine-tuned to produce state-of-the-art results for a very wide range of language understanding tasks, the dynamics of this process are not well understood, especially in the low data regime. Why can we use relatively vanilla gradient descent algorithms (e.g., without strong regularization) to tune a model with hundreds of millions of parameters on datasets with only hundreds or thousands of labeled examples? In this paper, we argue that analyzing fine-tuning through the lens of intrinsic dimension provides us with empirical and theoretical intuitions to explain this remarkable phenomenon. We empirically show that common pre-trained models have a very low intrinsic dimension; in other words, there exists a low dimension reparameterization that is as effective for fine-tuning as the full parameter space. For example, by optimizing only 200 trainable parameters randomly projected back into the full space, we can tune a RoBERTa model to achieve 90% of the full parameter performance levels on MRPC. Furthermore, we empirically show that pre-training implicitly minimizes intrinsic dimension and, perhaps surprisingly, larger models tend to have lower intrinsic dimension after a fixed number of pre-training updates, at least in part explaining their extreme effectiveness. Lastly, we connect intrinsic dimensionality with low dimensional task representations and compression based generalization bounds to provide intrinsic-dimension-based generalization bounds that are independent of the full parameter count.
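
For context, the reparameterization the abstract describes can be written as θ_D = θ_0 + P θ_d, where the pretrained weights θ_0 stay frozen, P is a fixed random projection, and only the d low-dimensional coordinates θ_d (e.g., d = 200) are trained. The sketch below is an illustrative PyTorch-style implementation, not the authors' code; the wrapper name and the dense projection matrix are assumptions made for clarity (the paper relies on memory-efficient structured projections such as Fastfood to handle RoBERTa-scale models).

```python
# Minimal sketch (not the paper's code) of random-subspace fine-tuning:
# all pretrained weights are frozen and only a d-dimensional vector theta_d
# is trained, with a fixed random projection P mapping it back into the
# full parameter space: theta = theta_0 + P @ theta_d.

import torch
import torch.nn as nn
from torch.func import functional_call


class SubspaceWrapper(nn.Module):
    def __init__(self, base_model: nn.Module, d: int = 200):
        super().__init__()
        self.base = base_model
        for p in self.base.parameters():              # freeze pretrained weights
            p.requires_grad_(False)
        total = sum(p.numel() for p in self.base.parameters())
        # Fixed, untrained random projection from R^d into R^D.  A dense D x d
        # matrix only fits in memory for small models; the paper uses structured
        # (Fastfood) projections to scale the idea to RoBERTa-sized networks.
        self.register_buffer("P", torch.randn(total, d) / d ** 0.5)
        self.theta_d = nn.Parameter(torch.zeros(d))   # the only trainable tensor

    def forward(self, *args, **kwargs):
        offset = self.P @ self.theta_d                # low-dim update -> full space
        params, i = {}, 0
        for name, p in self.base.named_parameters():
            n = p.numel()
            params[name] = p + offset[i:i + n].view_as(p)
            i += n
        # Run the frozen model with the reparameterized weights; gradients
        # flow back only into theta_d.
        return functional_call(self.base, params, args, kwargs)
```

Optimizing `theta_d` alone and sweeping d until task performance reaches 90% of full fine-tuning is how an intrinsic dimension such as d ≈ 200 for MRPC is read off.
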

Community

This is the foundation of modern fine-tuning and quantization, and the experimental findings here relate closely to theory of mind. Essentially, with more parameters a model becomes more reluctant to learn a high-intrinsic-dimensional solution, since a lower-dimensional representation suffices given the sheer number of parameters. As a result, the model is always trying to minimize 'surprise' while reducing 'energy' -- in this case, the intrinsic dimension.
This means a low-dimensional projection adapter plus a few thousand examples suffice to achieve good fine-tuning results, giving PEFT a solid foundation to build on (see the sketch after this comment). The more recent BitNet leverages the same insight, confidently scaling up a 1-bit-weight model that matches float16 models in performance: because intrinsic dimension gets lower at larger scale, 1-bit quantization becomes enough to carry the representation. In other words, as the model scales up, the precision of its weight parameters becomes more and more redundant.
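
To make the PEFT and quantization points concrete, here is a hedged sketch: a LoRA-style low-rank adapter (the kind of low-dimensional projection adapter mentioned above) plus a toy sign-and-scale binarizer. Class and function names are illustrative assumptions, not the API of any particular library.

```python
# Illustrative LoRA-style adapter: the pretrained linear layer stays frozen
# and only a rank-r update B @ A is trained, exploiting the low intrinsic
# dimension of the downstream task.

import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    def __init__(self, linear: nn.Linear, r: int = 8):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)       # freeze pretrained weight
        if self.linear.bias is not None:
            self.linear.bias.requires_grad_(False)
        # Low-rank factors: only r * (in_features + out_features) new parameters.
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))  # zero init => no-op at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.linear(x) + x @ self.A.T @ self.B.T


def binarize(w: torch.Tensor) -> torch.Tensor:
    # Toy 1-bit weight quantization in the spirit of the BitNet point above:
    # keep only the sign of each weight plus a single per-tensor scale.
    return torch.sign(w) * w.abs().mean()


# Example: adapt one 768x768 projection and train only the A/B factors.
layer = LowRankAdapter(nn.Linear(768, 768), r=8)
trainable = [p for p in layer.parameters() if p.requires_grad]  # just A and B
```
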
