---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- mistral
- finetuned
- not-for-all-audiences
---
This is an ExLlamaV2 quantization of [KoboldAI/Mistral-7B-Erebus-v3](https://huggingface.co/KoboldAI/Mistral-7B-Erebus-v3) at 4 bits per weight (4bpw), made using the default calibration dataset.
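
The quantized weights can be loaded with the [exllamav2](https://github.com/turboderp/exllamav2) Python package. The sketch below follows exllamav2's basic generator example; the model directory path, prompt, and sampling values are placeholders, and the exact API may differ between exllamav2 versions.

```python
# Minimal sketch: load the 4bpw EXL2 weights and generate text with exllamav2.
# Paths, prompt, and sampler settings are illustrative assumptions.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "/path/to/Mistral-7B-Erebus-v3-4bpw-exl2"  # local copy of this repo
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)          # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8
settings.top_p = 0.9

output = generator.generate_simple("Once upon a time,", settings, num_tokens=200)
print(output)
```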
# Original Model card:
# Mistral-7B-Erebus
## Model description
This is the third generation of the original Shinen made by Mr. Seeker. The full dataset consists of 8 different sources, all surrounding the "Adult" theme. The name "Erebus" comes from Greek mythology and means "darkness". This is in line with Shin'en, or "deep abyss". For inquiries, please contact the KoboldAI community. **Warning: THIS model is NOT suitable for use by minors. The model will output X-rated content.**
## Training procedure
Mistral-7B-Erebus was trained on 8x A6000 Ada GPUs for a single epoch. No special frameworks were used.
## Training data
The data can be divided into 8 different datasets:
- Literotica (everything with 3.0/5 or higher)
- Sexstories (everything with 70 or higher)
- Dataset-G (private dataset of X-rated stories)
- Doc's Lab (all stories)
- Lushstories (Editor's pick)
- Swinglifestyle (all stories)
- Pike-v2 Dataset (novels with "adult" rating)
- SoFurry (collection of various animals)
The dataset uses `[Genre: <comma-separated list of genres>]` for tagging.
The full dataset is 2.3B tokens in size.
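
Because the training data is tagged this way, a genre tag can be prepended to the prompt to steer generation. A hypothetical example (the genre names are illustrative, not a list taken from the dataset):

```python
# Hypothetical prompt using the [Genre: ...] tag convention described above.
prompt = "[Genre: horror, romance]\nThe old house at the end of the lane had stood empty for decades, until"
```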
## Limitations and biases
Given known problems with NLP technology, potentially relevant factors include bias (gender, profession, race, and religion). **Warning: This model has a very strong NSFW bias!**