Pythia 1.4b Deduped with 8k Context Window

This model is a fine-tune of Pythia 1.4b (deduped) with a context window of 8k tokens. With optimizations like Flash Attention and bitsandbytes, I could fit the entire model on a single A100 (40 GB) with a batch size of 1. Fine-tuning took ~30 hours, after which the loss was similar to that of fine-tuning with the original 2k-token context window.
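Below is a minimal sketch of loading this checkpoint for inference at the extended 8k context window, assuming the standard `transformers` API; the prompt, sampling settings, and `max_new_tokens` value are illustrative and not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "naxautify/pythia-1.4b-deduped-8k"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # the checkpoint is stored in FP16
).to("cuda")

# Illustrative prompt; any text up to the 8k-token window works.
prompt = "The history of deep learning begins"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")

# Prompt plus generated tokens can total up to 8192 tokens.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    pad_token_id=tokenizer.eos_token_id,  # GPT-NeoX tokenizers lack a pad token
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that Flash Attention and the bitsandbytes 8-bit optimizer (e.g. `bnb.optim.Adam8bit`) were memory optimizations for the training run described above; neither is required for inference.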

