victormiller committed on
Commit b5333d3
1 Parent(s): 5e43c0c

Update README.md

Files changed (1)
  1. README.md +37 -36
README.md CHANGED
@@ -191,19 +191,30 @@ We present CrystalChat, an instruction following model finetuned from [LLM360/Cr
 
  As always, the training data, training code, and metrics are publicly available.
 
- ## About LLM360
-
- LLM360 is an initiative for comprehensive and fully open-sourced LLMs,
- where all training details, model checkpoints, intermediate results, and
- additional analyses are made available to the community. Our goal is to advance
- the field by inviting the community to deepen the understanding of LLMs
- together. As the first step of the project LLM360, we release all intermediate
- model checkpoints, our fully-prepared pre-training dataset, all source code and
- configurations, and training details. We are
- committed to continually pushing the boundaries of LLMs through this open-source
- effort.
-
- Get access now at [LLM360 site](https://www.llm360.ai/)
+ # CrystalChat Performance
+
+ | Model | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. | ARC | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande (5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
+ |:------------------------:|:--------------:|:------------:|:-------------:|:-----------:|:-----:|:---------:|:-------------:|:-----:|:------------------:|:----------:|:------------------:|:-------------:|
+ | CrystalChat 7B | 1.275T | 44.96 | 53.29 | 36.62 | 51.71 | 76.12 | 53.22 | 28.05 | 70.64 | 47.29 | 34.12 | 39.11 |
+ | Mistral-7B-Instruct-v0.1 | - | 44.34 | 54.86 | 30.62 | 58.05 | 75.71 | 55.56 | 32.00 | 74.27 | 55.90 | 29.27 | 31.96 |
+ | CodeLlama-7b-Instruct | 2.5T | 40.91 | 45.29 | 36.52 | 43.35 | 66.14 | 42.75 | 15.92 | 64.33 | 39.23 | 34.12 | 38.91 |
+ | Llama-2-7b-Chat | 2T | 34.11 | 52.86 | 15.35 | 53.07 | 78.39 | 48.42 | 18.88 | 73.09 | 45.30 | 13.26 | 17.43 |
+ | AmberChat 7B | 1.25T | - | 44.76 | - | 42.83 | 74.03 | 38.88 | 5.31 | 66.77 | 40.72 | - | - |
+
+ | Combined Language and Coding Ability |
+ |------------------------------------------------|
+ <img src="CC-Compare.jpg" alt="arc" width="800"/>
+
+ | Performance on Standard Benchmarks |
+ |------------------------------------------------|
+ <img src="cc-eval-std-benchmarks.png" alt="std-bench" width="800"/>
+
+ | Performance on Language Benchmarks |
+ |---------------------------------------------------------|
+ <img src="cc-eval-lang-compare.png" alt="arc" width="800"/>
 
  # Instruction Tuning Training
 
@@ -262,30 +273,6 @@ The instruction format is as follows:
 
  We will release the training code and the training data soon. Our training code is based on [Megatron-LM](https://github.com/NVIDIA/Megatron-LM), with some modifications to support our training data format and Maximal Update Parametrization (μP).
 
- # CrystalChat Performance
-
- | Model | Trained Tokens | Avg. of Avg. | Language Avg. | Coding Avg. | ARC | HellaSwag | MMLU (5-shot) | GSM8K | Winogrande(5-shot) | TruthfulQA | HumanEval (pass@1) | MBPP (pass@1) |
- |:------------------------:|:--------------:|:------------:|:-------------:|:-----------:|:-----:|:---------:|:-------------:|:-----:|:------------------:|:----------:|:------------------:|:-------------:|
- | CrystalChat 7B | 1.275T | 44.96 | 53.29 | 36.62 | 51.71 | 76.12 | 53.22 | 28.05 | 70.64 | 47.29 | 34.12 | 39.11 |
- | Mistral-7B-Instruct-v0.1 | - | 44.34 | 54.86 | 30.62 | 58.05 | 75.71 | 55.56 | 32.00 | 74.27 | 55.90 | 29.27 | 31.96 |
- | CodeLlama-7b-Instruct | 2.5T | 40.91 | 45.29 | 36.52 | 43.35 | 66.14 | 42.75 | 15.92 | 64.33 | 39.23 | 34.12 | 38.91 |
- | Llama-2-7b-Chat | 2T | 34.11 | 52.86 | 15.35 | 53.07 | 78.39 | 48.42 | 18.88 | 73.09 | 45.30 | 13.26 | 17.43 |
- | AmberChat 7B | 1.25T | - | 44.76 | - | 42.83 | 74.03 | 38.88 | 5.31 | 66.77 | 40.72 | - | - |
-
- | Combined Language and Coding Ability |
- |------------------------------------------------|
- <img src="CC-Compare.jpg" alt="arc" width="800"/>
-
- | Performance on Standard Benchmarks |
- |------------------------------------------------|
- <img src="cc-eval-std-benchmarks.png" alt="std-bench" width="800"/>
-
- | Perforamnce on Language Benchmarks |
- |---------------------------------------------------------|
- <img src="cc-eval-lang-compare.png" alt="arc" width="800"/>
-
  ## Model Description
 
  - **Model type:** Language model with the same architecture as LLaMA-7B
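
The Maximal Update Parametrization (μP) mentioned in the context lines above boils down to width-aware scaling of optimizer hyperparameters, so that learning rates tuned on a small proxy model transfer to the full-width model. A minimal sketch of the idea, assuming a simple Adam-style 1/width rule with illustrative widths and learning rates; this is not the actual Megatron-LM modification used for CrystalChat:

```python
import torch
from torch import nn

# Minimal sketch of muP-style learning-rate scaling for an Adam-type optimizer:
# "hidden" (matrix-shaped Linear) weights get lr * base_width / width, while the
# embedding table, biases, and LayerNorm parameters keep the base learning rate.
# base_width, width, and base_lr are illustrative assumptions, not CrystalChat's
# actual training hyperparameters.
base_width, width, base_lr = 256, 1024, 3e-4

model = nn.Sequential(
    nn.Embedding(32000, width),   # token embedding (kept at base lr)
    nn.Linear(width, 4 * width),  # hidden weight (scaled lr)
    nn.GELU(),
    nn.Linear(4 * width, width),  # hidden weight (scaled lr)
    nn.LayerNorm(width),          # norm parameters (kept at base lr)
)

hidden_weights, other_params = [], []
for module in model.modules():
    if isinstance(module, nn.Linear):
        hidden_weights.append(module.weight)
        if module.bias is not None:
            other_params.append(module.bias)
    elif isinstance(module, (nn.Embedding, nn.LayerNorm)):
        other_params.extend(module.parameters())

optimizer = torch.optim.AdamW([
    {"params": hidden_weights, "lr": base_lr * base_width / width},  # 1/width scaling
    {"params": other_params, "lr": base_lr},
])
print(f"hidden lr = {base_lr * base_width / width:.2e}, other lr = {base_lr:.2e}")
```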
@@ -369,4 +356,18 @@ CrystalChat has not been aligned to human preferences for safety within the RLHF
  archivePrefix={arXiv},
  primaryClass={cs.CL}
  }
- ```
+ ```
+
+ ## About LLM360
+
+ LLM360 is an initiative for comprehensive and fully open-sourced LLMs,
+ where all training details, model checkpoints, intermediate results, and
+ additional analyses are made available to the community. Our goal is to advance
+ the field by inviting the community to deepen the understanding of LLMs
+ together. As the first step of the project LLM360, we release all intermediate
+ model checkpoints, our fully-prepared pre-training dataset, all source code and
+ configurations, and training details. We are
+ committed to continually pushing the boundaries of LLMs through this open-source
+ effort.
+
+ [Visit Us](https://www.llm360.ai/)
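
Since CrystalChat shares the LLaMA-7B architecture and is released through the LLM360 organization, here is a minimal generation sketch, assuming the checkpoint is hosted on the Hugging Face Hub as `LLM360/CrystalChat` and ships custom modeling code (hence `trust_remote_code=True`); the plain-text prompt below does not use the model's own instruction template:

```python
# Minimal usage sketch; the repository id and custom-code requirement are
# assumptions about how the checkpoint is published, not statements from the diff.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LLM360/CrystalChat"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("Write a Python function that reverses a string.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```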