Continued Pretraining - Code (StarCoder-Python) Collection This collection contains checkpoints and LoRA adapters for Llama-2-7B trained on the Python subset of StarCoder for up to 20 billion tokens. • 4 items • Updated Sep 27
Continued Pretraining - Math (OpenWebMath) Collection This collection contains model and LoRA adapter checkpoints for Llama-2-7B trained on OpenWebMath for up to 20 billion tokens. • 4 items • Updated Sep 27