---
license: cc-by-nc-4.0
base_model:
- Alsebay/NarumashiRTS-V2
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Nitral-AI/KukulStanta-7B
library_name: transformers
tags:
- moe
- merge
- roleplay
- Roleplay
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
pipeline_tag: text-generation
inference: false
quantized_by: Suparious
---

# Alsebay/NaruMOE-3x7B-v2 AWQ

- Model creator: [Alsebay](https://huggingface.co/Alsebay)
- Original model: [NaruMOE-3x7B-v2](https://huggingface.co/Alsebay/NaruMOE-3x7B-v2)

## Model Summary

A MoE model for roleplaying. Since 7B models are small enough, we can combine several of them into a bigger model (which CAN be smarter).

It handles some (limited) TSF (trans-sexual fiction) content, because I have included my pre-trained model in the merge.

Worse than V1 in logic, but better in expression.
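
For a quick local test, here is a minimal inference sketch using 🤗 Transformers, which can load AWQ checkpoints when the `autoawq` package is installed. The `model_id` below is a placeholder (this card does not name the quantized repository), and the prompt is only illustrative:

```python
# pip install autoawq transformers accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id for the AWQ weights -- substitute the actual repository.
model_id = "<user>/NaruMOE-3x7B-v2-AWQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",  # place the 4-bit AWQ weights on the available GPU(s)
)

# Illustrative roleplay-style prompt.
prompt = (
    "You are the narrator of a fantasy roleplay.\n"
    "User: Describe the tavern we just entered.\n"
    "Narrator:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the metadata sets `inference: false`, the hosted inference widget is disabled; running the model locally as above, or through a server such as vLLM (which also supports AWQ), is the expected path.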