---
license: apache-2.0
---

Train openllama-7b with in-context learning

A Reproduction of OpenLLaMA using 128 H100 GPUs in Bfloat16.

The pretraining data consists of Falcon, StarCoder, and the Wikipedia, arXiv, Books, and StackExchange subsets of RedPajama, totaling nearly 1 trillion tokens.

The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning-rate schedule starting at 3e-5, using a 4M-token batch size.
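As a rough illustration of the schedule above, here is a minimal sketch of a cosine learning-rate curve with linear warm-up. The total step count (~250k, implied by ~1T tokens at a 4M-token batch size) and the zero decay floor are assumptions, not stated in this card:

```python
import math

def lr_at_step(step, total_steps=250_000, peak_lr=3e-5,
               warmup_steps=2_000, min_lr=0.0):
    """Hypothetical helper: linear warm-up to peak_lr over warmup_steps,
    then cosine decay to min_lr by total_steps."""
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warm-up
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))
```

For example, `lr_at_step(2_000)` returns the peak of 3e-5, and the rate decays smoothly toward `min_lr` as the step count approaches 250k.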

The sole distinction from the OpenLLaMA 7B base model lies in the organization of Falcon documents, which follows the methodology outlined in this arXiv paper.
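The idea of organizing related documents contiguously (rather than concatenating them in arbitrary order) can be sketched as below. This is a toy illustration, not the cited paper's actual method: the `jaccard` similarity and greedy chaining are stand-ins for whatever retrieval-based grouping the paper uses:

```python
def jaccard(a, b):
    """Toy word-overlap similarity between two documents."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def order_related(docs):
    """Greedily chain documents so each one is followed by its most
    similar remaining neighbor, so related documents end up adjacent
    in the packed training sequence (hypothetical sketch)."""
    remaining = list(docs)
    ordered = [remaining.pop(0)]
    while remaining:
        best = max(remaining, key=lambda d: jaccard(ordered[-1], d))
        remaining.remove(best)
        ordered.append(best)
    return ordered
```

Given a mix of documents on different topics, this ordering places topically similar ones next to each other, so a single training context window is more likely to contain mutually relevant text.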
