Update README.md

README.md CHANGED
@@ -1,3 +1,27 @@
+# Llama-2-13b SuperCOT LoRA checkpoints
+
+These are my Llama-2-13b SuperCOT LoRAs trained using QLoRA on the [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset).
+
+### Architecture
+
+- **Model Architecture**: Llama-2-13b
+- **Training Algorithm**: QLoRA
+
+### Training Details
+
+- **Dataset**: [SuperCOT Dataset](https://huggingface.co/datasets/kaiokendev/SuperCOT-dataset)
+- **Dataset type**: alpaca
+- **Training Parameters**: [See Here](https://github.com/OpenAccess-AI-Collective/axolotl/blob/main/examples/llama-2/qlora.yml)
+- **Training Environment**: Axolotl
+- **sequence_len**: 4096
+
+## Acknowledgments
+
+Special thanks to the creators of the datasets included in SuperCOT, to Kaiokendev for curating the SuperCOT dataset, and to the contributors of Axolotl.
+
+
+## Stuff generated by axolotl:
+
 ---
 library_name: peft
 ---
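
The card's `library_name: peft` front matter means these checkpoints should load as ordinary PEFT adapters on top of the base model. Here is a minimal sketch of that, not this repo's documented usage: the adapter repo ID below is a placeholder, and the Alpaca-style prompt is an assumption based on the card listing the dataset type as alpaca.

```python
# Minimal sketch: load a SuperCOT LoRA adapter onto Llama-2-13b with PEFT
# and run one Alpaca-style prompt. The adapter repo ID is a placeholder.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "meta-llama/Llama-2-13b-hf"
ADAPTER_ID = "your-username/llama-2-13b-supercot-lora"  # placeholder, not this repo's actual path

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(
    BASE_ID,
    torch_dtype=torch.float16,  # fp16 to fit 13B more easily; 4-bit loading also works
    device_map="auto",
)

# Attach the trained LoRA weights to the frozen base model.
model = PeftModel.from_pretrained(base, ADAPTER_ID)
model.eval()

# Alpaca-format prompt, assumed from the card's "Dataset type: alpaca".
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nSummarize what a LoRA adapter is.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

For deployment, `model.merge_and_unload()` can fold the LoRA deltas into the base weights so the result serves like a plain Llama-2-13b checkpoint.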