Update README.md
README.md
CHANGED
@@ -181,12 +181,6 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 | meta-llama/Llama-2-13b-chat-hf | 13B | 54.92 | 59.04 | 81.94 | 54.64 | 41.12 | 74.51 | 15.24 |
 
 
-### Training Infrastructure
-
-TODO: Fix this
-* **Hardware**: `StableLM 2 12B Chat` was trained on the Stability AI cluster across 8 nodes, each with 8 A100 80GB GPUs.
-* **Code Base**: We use our internal script for SFT training and the [HuggingFace Alignment Handbook](https://github.com/huggingface/alignment-handbook) for DPO training.
-
 ## Use and Limitations
 
 ### Intended Use
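For readers looking for the training setup removed above: the "Code Base" bullet pointed to DPO training via the HuggingFace Alignment Handbook, which is built on TRL's `DPOTrainer`. The sketch below is only an illustration of that kind of run, not Stability AI's internal script; the base model name, preference dataset, and hyperparameters are placeholder assumptions, and argument names vary across TRL versions.

```python
# Hedged sketch of a DPO fine-tuning run with TRL, in the spirit of the
# Alignment Handbook recipes. Not the authors' actual training code.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "stabilityai/stablelm-2-12b"                # assumed SFT starting point
dataset_name = "HuggingFaceH4/ultrafeedback_binarized"   # example preference dataset

model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference data with "prompt", "chosen", and "rejected" columns, as DPOTrainer expects.
train_dataset = load_dataset(dataset_name, split="train_prefs")

training_args = DPOConfig(
    output_dir="stablelm-2-12b-dpo",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=5e-7,
    beta=0.1,              # DPO regularization strength (illustrative value)
    num_train_epochs=1,
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    processing_class=tokenizer,  # older TRL versions take tokenizer= instead
)
trainer.train()
```

In practice a run like this would be launched with `accelerate launch` (or the Alignment Handbook's recipe YAMLs) across the multi-node cluster described in the removed "Hardware" bullet.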