
Model behavior during adaptation phase

#24 opened by jlli

Hi GPT-JT team,

I'm trying to apply UL2 adaptation to a 1.5B GPT model, and I'm noticing that in the first few steps of UL2 training, the model's performance on few-shot evaluation drops significantly before slowly climbing back up (for example, see the attached figure). Do you have any observations on how model performance changes over the course of UL2 training? Thanks!

Evaluation details:
We use the EleutherAI LM evaluation harness (https://github.com/EleutherAI/lm-evaluation-harness) with 3-shot prompts.
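
A minimal sketch of this kind of 3-shot evaluation with the harness's Python API (assuming a recent lm-evaluation-harness release; the checkpoint path and task list below are placeholders, not our exact configuration):

```python
# Minimal sketch of a 3-shot run with the harness's Python API
# (assumes lm-evaluation-harness >= 0.4; the checkpoint path and task
# list are placeholders, not the exact setup used for the figure).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face causal LM backend
    model_args="pretrained=/path/to/adapted-checkpoint",
    tasks=["hellaswag", "winogrande", "arc_easy"],   # illustrative tasks
    num_fewshot=3,                                   # 3-shot prompts
    batch_size=8,
)

# Per-task metrics; the few-shot average is just the mean of the task accuracies.
print(results["results"])
```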

[Figure: few-shot average vs. UL2 steps]

Together org

Hi @jlli, regarding UL2 adaptation, it's common to see model performance drop in the initial stages of training, since the model is being changed by the new objective, before it slowly recovers and improves again. We have also observed that larger models tend to yield better results with UL2 adaptation. Additionally, the duration of the initial performance drop and the rate of the subsequent improvement can vary depending on factors such as the size and quality of the fine-tuning data, the training hyperparameters, and so on.
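
To give a bit of intuition for the "model is being changed" part: UL2-style adaptation of a decoder-only model typically mixes in objectives such as a prefix LM, where attention over the prompt is bidirectional, which differs from the purely causal pretraining setup. A toy sketch of such a prefix-LM mask (illustrative only, not our exact training code):

```python
# Toy sketch of a prefix-LM attention mask, one ingredient commonly used in
# UL2-style adaptation of decoder-only models (illustrative only, not the
# exact training code). Prefix tokens attend to each other bidirectionally,
# while target tokens still attend causally.
import torch

def prefix_lm_mask(seq_len: int, prefix_len: int) -> torch.Tensor:
    """Boolean mask of shape (seq_len, seq_len); True means 'may attend'."""
    mask = torch.ones(seq_len, seq_len).tril().bool()   # standard causal mask
    mask[:prefix_len, :prefix_len] = True                # bidirectional over the prefix
    return mask

# Example: 6 tokens total, first 3 treated as the (bidirectional) prefix.
print(prefix_lm_mask(seq_len=6, prefix_len=3).int())
```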
