---
license: apache-2.0
---

# ArliAI-RPMax-12B-v1.1

## Overview

This repository is based on the Mistral-Nemo-Base-2407 model and is governed by that base model's Apache 2.0 license: https://huggingface.co/mistralai/Mistral-Nemo-Base-2407
## Model Description

ArliAI-RPMax-12B-v1.1 is trained on a diverse set of curated RP datasets with a focus on variety and deduplication. The model is designed to be highly creative and non-repetitive, with a training approach that minimizes repetition.

You can access the model at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

### Training Details

* **Sequence Length**: 8192
* **Training Duration**: Approximately 2 days on 2x3090Ti
* **Epochs**: 1 epoch, to minimize repetition sickness
* **QLoRA**: 64-rank, 128-alpha, resulting in ~2% trainable weights
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A low value of 32, for better learning
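As a back-of-the-envelope check on the "~2% trainable weights" figure above: a rank-r LoRA adapter on a frozen `d_out x d_in` weight matrix adds two small matrices of `r * d_in` and `d_out * r` parameters. The sketch below is illustrative only; the 5120 hidden size is an assumption about the base model's projection shapes, not something stated in this card.

```python
# Illustrative sketch: fraction of parameters made trainable by a LoRA
# adapter of a given rank on a single frozen weight matrix.
def lora_trainable_fraction(d_in: int, d_out: int, rank: int) -> float:
    """Adapter params (A: rank x d_in, B: d_out x rank) divided by the
    frozen matrix's d_in * d_out parameters."""
    base = d_in * d_out
    adapter = rank * (d_in + d_out)
    return adapter / base

# Assumed 5120 x 5120 projection; a rank-64 adapter trains
# 64 * (5120 + 5120) / 5120**2 = 2.5% of that matrix.
print(f"{lora_trainable_fraction(5120, 5120, 64):.3f}")
```

Averaged over a whole model (where only some matrices get adapters), this lands in the low single digits, consistent with the ~2% quoted above.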
## Quantization

The model is available in the following quantized formats:

* **FP16**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1
* **GPTQ_Q4**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1-GPTQ_Q4
* **GPTQ_Q8**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1-GPTQ_Q8
* **GGUF**: https://huggingface.co/ArliAI/ArliAI-RPMax-12B-v1.1-GGUF
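To help choose between the formats above, here is a rough, weight-only estimate of the memory footprint of a 12B-parameter model at each bit width. This is a sketch: it ignores the KV cache, activations, and the per-group scales and metadata that quantized formats store, so real file sizes differ somewhat.

```python
# Rough weight-only memory footprint at a given bit width; actual GPTQ/GGUF
# files are somewhat larger because they also store quantization metadata.
def weights_gib(n_params: float, bits_per_weight: float) -> float:
    """Size in GiB of n_params weights stored at bits_per_weight each."""
    return n_params * bits_per_weight / 8 / 2**30

for name, bits in [("FP16", 16), ("GPTQ_Q8", 8), ("GPTQ_Q4", 4)]:
    print(f"{name}: ~{weights_gib(12e9, bits):.1f} GiB")
```

This gives roughly 22.4 GiB at FP16, 11.2 GiB at 8-bit, and 5.6 GiB at 4-bit for the weights alone, which is why the Q4 variant is the usual choice for a single 24 GB consumer GPU.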
## Suggested Prompt Format

Mistral Instruct Prompt Format