Commit d6317fe
Parent(s): 212d4f4
Update README.md

README.md CHANGED
@@ -1,4 +1,14 @@
 ---
 license: apache-2.0
 ---
-Train openllama-7b with in-context learning
+Train openllama-7b with [in-context learning](https://arxiv.org/abs/2310.10638)
+
+
+A reproduction of OpenLLaMA using 128 H100 GPUs in bfloat16.
+
+The pretraining data consists of Falcon, Starcoder, and the Wikipedia, arXiv, books, and StackExchange subsets of RedPajama, totaling nearly 1 trillion tokens.
+
+The model was trained for a single epoch with 2,000 warm-up steps and a cosine learning rate schedule, starting at 3e-5 with a 4M-token batch size.
+
+
+The sole distinction from the [OpenLLaMA 7B Base](https://huggingface.co/itsliupeng/openllama-7b-base) lies in the organization of Falcon documents, which follows the methodology outlined in this [arXiv paper](https://arxiv.org/abs/2310.10638).
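
A minimal sketch of the learning rate schedule described in the updated README (2,000 linear warm-up steps, then cosine decay from a peak of 3e-5); `max_steps` and `min_lr` are illustrative assumptions not stated in the commit:

```python
import math

def lr_at_step(step: int, max_steps: int, warmup_steps: int = 2000,
               peak_lr: float = 3e-5, min_lr: float = 0.0) -> float:
    """Linear warm-up to peak_lr, then cosine decay toward min_lr.

    max_steps and min_lr are assumptions for illustration only.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps  # linear warm-up phase
    # Fraction of the post-warm-up schedule completed, in [0, 1].
    progress = (step - warmup_steps) / max(1, max_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```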
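
The document-organization step mentioned in the last paragraph places related documents into the same context window rather than shuffling them at random. As a toy illustration only (the linked paper scales this idea up with approximate nearest-neighbor retrieval rather than the brute-force greedy chaining below):

```python
import numpy as np

def greedy_related_order(doc_embeddings: np.ndarray) -> list[int]:
    """Order documents so similar ones land adjacently in context windows.

    Greedily chains each document to its most similar unplaced neighbor;
    a rough sketch of the idea, not the paper's exact algorithm.
    """
    # Normalize rows so dot products equal cosine similarity.
    emb = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    n = len(emb)
    placed = np.zeros(n, dtype=bool)
    order = [0]
    placed[0] = True
    for _ in range(n - 1):
        sims = emb @ emb[order[-1]]   # similarity to the last-placed document
        sims[placed] = -np.inf        # exclude documents already placed
        nxt = int(np.argmax(sims))
        order.append(nxt)
        placed[nxt] = True
    return order
```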