Update README.md
README.md CHANGED

```diff
@@ -3,7 +3,7 @@ license: apache-2.0
 ---
 FP16 model merge of airoboros 70b 1.4.1 (https://huggingface.co/jondurbin/airoboros-l2-70b-gpt4-1.4.1) and limarpv3-llama2-70b-qlora (https://huggingface.co/Doctor-Shotgun/limarpv3-llama2-70b-qlora).
 
-
+# Original LoRA card:
 
 [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
 # limarpv3-llama2-70b-qlora
@@ -120,7 +120,7 @@ The following hyperparameters were used during training:
 - Datasets 2.14.5
 - Tokenizers 0.14.1
 
-
+# Original model card
 
 ### Overview
```
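The README describes an FP16 merge of a QLoRA adapter into a base model. As background, the arithmetic of folding a LoRA adapter into a frozen weight matrix is `W' = W + (alpha / r) * (B @ A)`. Below is a minimal toy sketch of that formula in plain Python with hypothetical 2x2 values; a real merge applies the same update to each adapted weight matrix of the 70B model (for example via a library such as PEFT), not to hand-written lists.

```python
# LoRA merge arithmetic: W' = W + (alpha / r) * (B @ A)
# Toy example with hypothetical values; real merges do this per adapted weight matrix.

def matmul(X, Y):
    """Naive matrix multiply for small nested-list matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

W = [[1.0, 0.0], [0.0, 1.0]]   # frozen base weight (identity for clarity)
A = [[0.5, -0.5]]              # LoRA down-projection, rank r = 1
B = [[2.0], [4.0]]             # LoRA up-projection
alpha, r = 2, 1
scale = alpha / r

# Fold the low-rank update into the base weight; after this the adapter
# can be discarded and the merged model serves inference on its own.
BA = matmul(B, A)
W_merged = [[W[i][j] + scale * BA[i][j] for j in range(2)] for i in range(2)]
print(W_merged)  # [[3.0, -2.0], [4.0, -3.0]]
```

Once merged, the combined weights can be saved in FP16, which is what produces a standalone checkpoint like the one this card describes.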