---
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
---
# Mistral-Small-22B-ArliAI-RPMax-v1.1

## RPMax Series Overview

| [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) | [3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) | [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) | [9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) | [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) | [20B](https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1) | [22B](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) | [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1) |

RPMax is a series of models trained on a diverse set of curated creative writing and RP datasets with a focus on variety and deduplication. This model is designed to be highly creative and non-repetitive: no two entries in the dataset share the same characters or situations, which keeps the model from latching on to a certain personality and leaves it capable of understanding and acting appropriately for any character or situation.
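
The sketch below is only a toy illustration of that deduplication idea (keeping at most one entry per character); it is not the actual RPMax curation pipeline, and the entry fields are assumed for the example.

```python
# Toy illustration of character-level deduplication: keep at most one
# dataset entry per character so the model never over-fits a single persona.
# This is NOT the actual RPMax curation pipeline; field names are assumed.
def dedupe_by_character(entries: list[dict]) -> list[dict]:
    seen_characters = set()
    kept = []
    for entry in entries:  # e.g. {"character": "Lina", "conversation": "..."}
        if entry["character"] in seen_characters:
            continue  # drop repeats of the same character to preserve variety
        seen_characters.add(entry["character"])
        kept.append(entry)
    return kept
```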
Early tests by users mentioned that these models do not feel like any other RP models, having a different style and generally not feeling in-bred.

You can access the models at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

We also have a model ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server! https://discord.gg/aDVx6FZN

## Model Description

Mistral-Small-22B-ArliAI-RPMax-v1.1 is a variant of mistralai/Mistral-Small-Instruct-2409, which is released under a restrictive Mistral license, so this model is for your personal use only.
### Training Details

* **Sequence Length**: 8192
* **Training Duration**: Approximately 4 days on 2x3090Ti
* **Epochs**: 1 epoch of training to minimize repetition sickness
* **QLoRA**: 64 rank, 128 alpha, resulting in ~2% trainable weights (an illustrative configuration sketch follows this list)
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A very low 32 for better learning

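
For readers who want to reproduce a roughly similar setup, below is a minimal QLoRA sketch using the `transformers` and `peft` libraries that matches the 64-rank / 128-alpha numbers above; the target modules, dropout, and quantization settings are assumptions, not the exact recipe used for this model.

```python
# Illustrative only: a QLoRA-style configuration matching the rank/alpha
# listed above. Target modules, dropout, and quantization settings are
# assumptions, not the exact values used to train RPMax.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-Instruct-2409",
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=64,                                   # rank, as listed above
    lora_alpha=128,                         # alpha, as listed above
    lora_dropout=0.05,                      # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()          # should report roughly ~2% trainable
```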
## Suggested Prompt Format

Mistral Instruct Format
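
For reference, here is a minimal sketch of producing Mistral-Instruct-style prompts (`[INST] ... [/INST]`) via the tokenizer's chat template; the character card and messages are placeholders, not a recommended setup.

```python
# Minimal sketch: let the tokenizer's chat template handle the
# Mistral Instruct ([INST] ... [/INST]) formatting.
# The messages below are placeholders, not a recommended system prompt.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1")

messages = [
    {"role": "user", "content": "You are Lina, a sarcastic ship mechanic. Stay in character.\n\nHi Lina, can you fix my engine?"},
    {"role": "assistant", "content": "*wipes grease off her hands* Depends. Did you break it, or did it break you?"},
    {"role": "user", "content": "Bit of both, honestly."},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # each user turn is wrapped in [INST] ... [/INST]
```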