Update README.md
README.md CHANGED

@@ -3,6 +3,8 @@ datasets:
 - jondurbin/airoboros-gpt4-1.4.1
 ---
 
+NOTE: This LoRA was trained on Llama-30b AFTER additional pretraining. I intend to provide the LoRA of that pretraining as well. Applying this LoRA to base Llama-30b will likely result in a performance reduction. I have uploaded the fp16 merged weights [here](https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-LoRA/)
+
 Mostly untested!
 
 Find GPTQ quantized weights and full model card here: https://huggingface.co/bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-GPTQ