|
--- |
|
license: creativeml-openrail-m |
|
language: |
|
- en |
|
pipeline_tag: text-generation |
|
--- |
|
|
|
|
|
## Model Details |
|
|
|
### Model Description |
|
|
|
|
|
|
|
|
|
|
- **Developed by:** nRuaif |
|
- **Model type:** large language model |
|
- **License:** creativeml-openrail-m
|
- **Finetuned from model:** Llama-13B
|
|
|
|
|
|
## Uses |
|
|
|
|
The model uses the FastChat/ShareGPT prompt format, but other formats should work fine as well. A typical prompt is shown below.
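For reference, a FastChat/Vicuna-style prompt usually looks like the following. This card does not pin down an exact template, so treat the system line and turn markers as an assumption based on the standard FastChat convention:

```
A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: <your message> ASSISTANT:
```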
|
|
|
|
|
### Direct Use |
|
|
|
|
|
|
This model is fine-tuned for general and erotic roleplay, while still being able to act as an assistant (though it might not be a very helpful one).
|
|
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
Do anything you want. I don't care.
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
|
|
|
The model may be biased toward NSFW content due to the large share of NSFW data in the training set.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
## Training Details |
|
|
|
### Training Data |
|
|
|
|
|
|
|
|
Roughly 3,000 conversations with a cutoff length of 4,090 tokens.
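A single conversation in the ShareGPT format mentioned under Uses is typically structured like the record below. The field names follow the common ShareGPT convention; the exact schema of this dataset is an assumption:

```json
{
  "conversations": [
    { "from": "human", "value": "Hi! Can you roleplay as a medieval innkeeper?" },
    { "from": "gpt", "value": "Welcome, traveler! Pull up a chair by the fire." }
  ]
}
```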
|
|
|
### Training Procedure |
|
|
|
|
|
|
|
|
|
|
|
|
#### Training Hyperparameters |
|
|
|
- **Training regime:** BF16 precision, QLoRA, constant learning rate of 5e-5 (see the sketch below)
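As a rough illustration, a QLoRA setup consistent with this regime might look like the following sketch. Only BF16, QLoRA, and the constant 5e-5 learning rate come from this card; the base model repo id, LoRA rank/alpha/dropout, target modules, and batch size are illustrative assumptions.

```python
# Sketch of a QLoRA fine-tuning setup with transformers + peft.
# Only BF16, QLoRA, and the constant 5e-5 LR come from the card;
# everything else here is an assumed, illustrative choice.
import torch
from transformers import (
    AutoModelForCausalLM,
    BitsAndBytesConfig,
    TrainingArguments,
)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# QLoRA: quantize the frozen base weights to 4-bit NF4, compute in BF16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-13b",  # assumed repo id for the Llama-13B base
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Trainable low-rank adapters on the attention projections (assumed).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="out",
    bf16=True,                      # BF16 precision, as listed
    learning_rate=5e-5,             # constant LR from the card
    lr_scheduler_type="constant",
    per_device_train_batch_size=1,  # assumed
)
```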
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
### Compute Infrastructure |
|
|
|
The model was trained on a single A100 for 2 hours on RunPod.
|
|
|
|
|
|