---
license: other
license_name: yi-license
license_link: LICENSE
datasets:
- jondurbin/airoboros-3.1
---

# Instruction tune of Yi-34B-200K with Airoboros-3.1 (fp16)

## Overview

This is [larryvrh/Yi-34B-200K-Llamafied](https://huggingface.co/larryvrh/Yi-34B-200K-Llamafied), instruction-tuned with Jon Durbin's [jondurbin/airoboros-3.1](https://huggingface.co/datasets/jondurbin/airoboros-3.1) dataset. The base model is [01-ai/Yi-34B-200k](https://huggingface.co/01-ai/Yi-34B-200k), repackaged with Llama 2 model definitions and tokenizer so that no remote code is required.

**This is a (merged) QLoRA fine-tune (rank 64).**

The fine-tune was performed on a single RTX 6000 Ada (~80 hours to this checkpoint). Prompts were truncated to 4096 tokens for speed and VRAM headroom.

I have done very little testing with this model, so feedback on real-world performance is appreciated!
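
For reference, a QLoRA setup along these lines reproduces the shape of this fine-tune (rank-64 adapters on a 4-bit quantized base). This is an illustrative sketch only, not the exact training configuration; the `lora_alpha`, dropout, and target modules shown are assumptions.

```python
# Illustrative QLoRA setup (assumed values, not the exact config used here):
# rank-64 LoRA adapters on top of a 4-bit NF4-quantized base model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_id = "larryvrh/Yi-34B-200K-Llamafied"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                  # rank 64, matching the adapter described above
    lora_alpha=16,         # assumed value
    lora_dropout=0.05,     # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```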

## How to Use

Use it as you would any other fp16 Llama 2 model in Hugging Face `transformers`.
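
A minimal loading and generation sketch with `transformers` is below. The model id is a placeholder; substitute this repository's id or a local path.

```python
# Minimal fp16 loading/generation example (model_id is a placeholder).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # replace with this repo's id or a local path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 weights, as noted above
    device_map="auto",
)

prompt = "[INST] Write a haiku about long context windows. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```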

## Prompting

The model was trained with the Llama 2 chat prompt format. See the [jondurbin/airoboros-l2-13b-3.1.1](https://huggingface.co/jondurbin/airoboros-l2-13b-3.1.1) model card for details.
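
For reference, a single-turn prompt in the Llama 2 chat format looks roughly like this (the system prompt is just an example; see the linked model card for the exact template used in training):

```
[INST] <<SYS>>
You are a helpful assistant.
<</SYS>>

Write a haiku about long context windows. [/INST]
```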