Is it trained on 1.3T or 4T tokens?

#8 by stormchaser - opened

The blog post https://stability.ai/news/stable-code-2024-llm-code-completion-release says it was trained on 4T tokens, but this model card says 1.3T tokens. What am I reading wrong?

Stability AI org

It was continually pretrained for 1.3T tokens starting from stabilityai/stablelm-3b-4e1t, which itself was pretrained on 4T tokens.
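
For anyone unclear on what continued pretraining means mechanically, here is a minimal sketch using the standard transformers API. The data pipeline and training details are placeholders, not Stability's actual recipe:

```python
# Sketch of "continued pretraining": start from the existing checkpoint
# instead of random weights, then keep training. The data and training
# setup are illustrative assumptions, not the real Stable Code recipe.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, which has already seen 4T tokens.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-3b-4e1t")
model = AutoModelForCausalLM.from_pretrained("stabilityai/stablelm-3b-4e1t")

# Continued pretraining = keep optimizing the same next-token-prediction
# objective, here on a new corpus of roughly 1.3T (mostly code) tokens.
# Across both stages the weights have seen about 4T + 1.3T = 5.3T tokens.
```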

So then it's like 5.3 trillion tokens total now?
