---
base_model:
- Riiid/sheep-duck-llama-2-13b
- IkariDev/Athena-v4
- TheBloke/Llama-2-13B-fp16
- KoboldAI/LLaMA2-13B-Psyfighter2
- KoboldAI/LLaMA2-13B-Erebus-v3
- Henk717/echidna-tiefigther-25
- Undi95/Unholy-v2-13B
- ddh0/EstopianOrcaMaid-13b
tags:
- mergekit
- merge
- not-for-all-audiences
- ERP
- RP
- Roleplay
- uncensored
- GPTQ
license: llama2
language:
- en
inference: false
---
|
# Model |
|
This is the 4-bit GPTQ quantized version of SnowyRP.
|
|
|
[BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B) |
|
|
|
[GPTQ](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ) |
|
|
|
[GGUF](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B-GGUF) |
|
|
|
Any future quantizations I am made aware of will be added.
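
A rough sketch of loading this quantized checkpoint (assumptions, not an official snippet from this repo): GPTQ checkpoints can typically be loaded straight through `transformers` when the `optimum` and `auto-gptq` packages are installed. The prompt below is only a placeholder.

```python
# Minimal sketch: loading the GPTQ checkpoint with transformers.
# Assumes `optimum` and `auto-gptq` are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Masterjp123/SnowyRP-FinalV1-L2-13B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Write a short opening scene for a fantasy roleplay."  # placeholder prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```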
|
|
|
## Merge Details |
|
|
|
This merge simply combines highly ranked models in an attempt to get a better result. I also made sure that model incest would not be a big problem by merging models that are pretty pure.
|
|
|
These models CAN and WILL produce X-rated or harmful content, since they are heavily uncensored in an attempt to avoid limiting or degrading the model.
|
|
|
This model has a very good knowledge base and a decent understanding of anatomy. It is also VERY versatile: it works great for general assistant tasks, RP and ERP, RPG-style roleplay, and much more.
|
|
|
## Model Use
|
|
|
This model is very good... WITH THE RIGHT SETTINGS.

I personally use Mirostat mixed with dynamic temperature, plus the epsilon and eta cutoffs.
|
```
Optimal settings (so far)

Mirostat
  mode: 2
  tau: 2.95
  eta: 0.05

Dynamic temperature
  min: 0.25
  max: 1.8

Cutoffs
  epsilon: 3
  eta: 3
```
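
If your backend exposes these samplers over an API, the settings map onto generation parameters roughly as below. This is a hypothetical sketch against a text-generation-webui-style OpenAI-compatible endpoint; the field names (`mirostat_mode`, `dynatemp_low`, ...) and the convention that the cutoff values are given in units of 1e-4 are assumptions about that frontend, not something defined by this repo.

```python
# Hypothetical sketch: sending the settings above to a local
# text-generation-webui-style completions endpoint.
import requests

payload = {
    "prompt": "Write a short opening scene for a fantasy roleplay.",
    "max_tokens": 200,
    "mirostat_mode": 2,          # Mirostat v2
    "mirostat_tau": 2.95,
    "mirostat_eta": 0.05,
    "dynamic_temperature": True,
    "dynatemp_low": 0.25,
    "dynatemp_high": 1.8,
    "epsilon_cutoff": 3,         # assumed to be in units of 1e-4
    "eta_cutoff": 3,             # assumed to be in units of 1e-4
}
response = requests.post("http://127.0.0.1:5000/v1/completions", json=payload)
print(response.json()["choices"][0]["text"])
```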
|
See the [BF16](https://huggingface.co/Masterjp123/SnowyRP-FinalV1-L2-13B) repo for more usage settings.
|
### Merge Method |
|
|
|
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method with [TheBloke/Llama-2-13B-fp16](https://huggingface.co/TheBloke/Llama-2-13B-fp16) as the base.
|
|
|
### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [Riiid/sheep-duck-llama-2-13b](https://huggingface.co/Riiid/sheep-duck-llama-2-13b) |
|
* [IkariDev/Athena-v4](https://huggingface.co/IkariDev/Athena-v4) |
|
* [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) |
|
* [KoboldAI/LLaMA2-13B-Erebus-v3](https://huggingface.co/KoboldAI/LLaMA2-13B-Erebus-v3) |
|
* [Henk717/echidna-tiefigther-25](https://huggingface.co/Henk717/echidna-tiefigther-25) |
|
* [Undi95/Unholy-v2-13B](https://huggingface.co/Undi95/Unholy-v2-13B) |
|
* [ddh0/EstopianOrcaMaid-13b](https://huggingface.co/ddh0/EstopianOrcaMaid-13b)
|
|
|
### Configuration |
|
|
|
The following YAML configuration was used to produce this model: |
|
|
|
For P1 (Masterjp123/snowyrpp1):
|
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: Undi95/Unholy-v2-13B
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Henk717/echidna-tiefigther-25
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Erebus-v3
    parameters:
      weight: 0.33
```
|
|
|
For P2 (Masterjp123/snowyrpp2):
|
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
  - layer_range: [0, 40]
    model:
      model:
        path: KoboldAI/LLaMA2-13B-Psyfighter2
    parameters:
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Riiid/sheep-duck-llama-2-13b
    parameters:
      weight: 0.45
  - layer_range: [0, 40]
    model:
      model:
        path: IkariDev/Athena-v4
    parameters:
      weight: 0.33
```
|
|
|
For the final merge:
|
```yaml
base_model:
  model:
    path: TheBloke/Llama-2-13B-fp16
dtype: bfloat16
merge_method: ties
parameters:
  int8_mask: 1.0
  normalize: 1.0
slices:
- sources:
  - layer_range: [0, 40]
    model:
      model:
        path: ddh0/EstopianOrcaMaid-13b
    parameters:
      density: [1.0, 0.7, 0.1]
      weight: 1.0
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp1
    parameters:
      density: 0.5
      weight: [0.0, 0.3, 0.7, 1.0]
  - layer_range: [0, 40]
    model:
      model:
        path: Masterjp123/snowyrpp2
    parameters:
      density: 0.33
      weight:
      - filter: mlp
        value: 0.5
      - value: 0.0
  - layer_range: [0, 40]
    model:
      model:
        path: TheBloke/Llama-2-13B-fp16
```
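
To reproduce the final merge, the usual route is mergekit's `mergekit-yaml` CLI. A hedged sketch, assuming mergekit is installed, the final YAML above is saved as `config.yaml`, and the intermediate merges are pulled from the Masterjp123/snowyrpp1 and Masterjp123/snowyrpp2 repos referenced in the config (the output directory name here is arbitrary):

```python
# Hypothetical reproduction sketch: invoking the mergekit-yaml CLI on the
# final merge config. Assumes `pip install mergekit` has been run and the
# YAML above is saved as config.yaml.
import subprocess

subprocess.run(["mergekit-yaml", "config.yaml", "./SnowyRP-FinalV1-L2-13B"], check=True)
```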