|
---
license: apache-2.0
datasets:
- bigcode/starcoderdata
- tiiuae/falcon-refinedweb
language:
- en
---
|
|
|
A reproduction of [OpenLLaMA](https://github.com/openlm-research/open_llama), trained on 128 H100 GPUs in bfloat16.
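
Assuming this reproduction keeps the standard LLaMA architecture and tokenizer, it should load with Hugging Face `transformers` as in the minimal sketch below; the repo id is a placeholder for this repository's path.

```python
# Minimal usage sketch (assumes a standard LLaMA-style checkpoint; the repo id is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/open-llama-reproduction"  # placeholder: replace with this repository's id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```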
|
|
|
|
|
The pretraining data consists of Falcon RefinedWeb, StarCoder, and the Wikipedia, arXiv, books, and StackExchange subsets of RedPajama, totaling nearly 1 trillion tokens.
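
A minimal sketch of how such a mixture could be streamed and interleaved with the `datasets` library is shown below; the sampling probabilities, column handling, and the single RedPajama subset are illustrative assumptions, not the actual pipeline used for this run (some of these datasets are gated or require remote code to load).

```python
# Illustrative only: streams three of the sources and interleaves them into one mixture.
# The sampling probabilities and column choices are assumptions, not the training recipe.
from datasets import load_dataset, interleave_datasets

falcon = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
starcoder = load_dataset("bigcode/starcoderdata", split="train", streaming=True)
rp_wiki = load_dataset(
    "togethercomputer/RedPajama-Data-1T", "wikipedia", split="train", streaming=True
)

# Normalize every stream to a single "text" column so they can be interleaved.
falcon = falcon.rename_column("content", "text").select_columns(["text"])
starcoder = starcoder.rename_column("content", "text").select_columns(["text"])
rp_wiki = rp_wiki.select_columns(["text"])

mix = interleave_datasets(
    [falcon, starcoder, rp_wiki],
    probabilities=[0.6, 0.3, 0.1],  # hypothetical mixing weights
    seed=42,
)

for example in mix.take(3):
    print(example["text"][:80])
```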
|
|
|
|
|
The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning-rate schedule starting at 3e-5 after warm-up, with a batch size of 4M tokens.
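
A minimal sketch of that schedule using PyTorch's `LambdaLR` follows; the total step count, learning-rate floor, and optimizer settings are assumptions for illustration, not the exact configuration used in training.

```python
# Sketch of linear warm-up (2,000 steps) followed by cosine decay.
# total_steps, min_lr_ratio, and the optimizer are illustrative assumptions.
import math
import torch

warmup_steps = 2_000
total_steps = 240_000      # hypothetical; depends on token count / batch size
peak_lr = 3e-5
min_lr_ratio = 0.1         # hypothetical floor as a fraction of the peak LR

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)             # linear warm-up to peak_lr
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * min(1.0, progress)))
    return min_lr_ratio + (1.0 - min_lr_ratio) * cosine  # cosine decay to the floor

model = torch.nn.Linear(8, 8)  # stand-in for the actual model
optimizer = torch.optim.AdamW(model.parameters(), lr=peak_lr)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
# During training, call optimizer.step() followed by scheduler.step() each step.
```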
|
|
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/643fb889b9ba82afb66d6b36/7nSjTJNB7qjwIa74Rr6kd.png) |
|
|