Create README.md
README.md (added):
---
datasets:
- jondurbin/airoboros-2.1
---

# Extended Context (via YaRN) Finetune of Llama-2-13b with airoboros-2.1 (LoRA)

## Overview

This is a finetune of [NousResearch/Yarn-Llama-2-13b-64k](https://huggingface.co/NousResearch/Yarn-Llama-2-13b-64k). That starting point is Llama-2-13b with additional pretraining in which YaRN scaling is applied to RoPE, extending the useful context length to 64k tokens. Starting from that model, I performed instruction tuning with [Jon Durbin's Airoboros 2.1 dataset](https://huggingface.co/datasets/jondurbin/airoboros-2.1), with the same scaling approach applied.

**This is a (merged) QLoRA fine-tune (rank 64).**
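
A QLoRA setup at rank 64 might look roughly like the sketch below; this is a minimal illustration with `peft` and `bitsandbytes`, not the exact training recipe. The LoRA alpha, dropout, and target modules are assumptions, since this card does not state them.

```python
# Minimal QLoRA configuration sketch (rank 64). Alpha, dropout, and
# target modules are illustrative assumptions, not the exact recipe used here.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit NF4 base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "NousResearch/Yarn-Llama-2-13b-64k",
    quantization_config=bnb_config,
    trust_remote_code=True,                 # YaRN RoPE scaling lives in custom modeling code
)

lora_config = LoraConfig(
    r=64,                                   # rank 64, as stated above
    lora_alpha=16,                          # assumption
    lora_dropout=0.05,                      # assumption
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the rank-64 adapters train
```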

The finetune was performed on 1x RTX 6000 Ada (~18 hours).

For the full model card, including benchmarks, see the model card of the fp16 [merged model](https://huggingface.co/bhenrym14/airoboros-l2-13b-2.1-YaRN-64k).
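
To try the merged fp16 model directly, loading it with `transformers` might look like the sketch below. This assumes the YaRN models need `trust_remote_code=True` for their custom RoPE implementation; the prompt format and generation settings are illustrative placeholders, not the exact airoboros-2.1 template.

```python
# Inference sketch for the merged fp16 model. trust_remote_code is assumed
# to be required for the YaRN RoPE code; prompt and sampling settings are
# placeholders only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhenrym14/airoboros-l2-13b-2.1-YaRN-64k"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "A chat.\nUSER: Summarize the benefits of YaRN context extension.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```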