---
datasets:
- Oniichat/bluemoon_roleplay_chat_data_300k_messages
inference: false
language:
- en
license: llama2
model_creator: PygmalionAI
model_link: https://huggingface.co/PygmalionAI/mythalion-13b
model_name: mythalion-13b
model_type: llama
pipeline_tag: text-generation
quantized_by: Eigeen
tags:
- text generation
- instruct
thumbnail: null
---
# Mythalion 13B - ExLlamaV2
Original model: [mythalion-13b](https://huggingface.co/PygmalionAI/mythalion-13b)
# Description
This is my first attempt at quantization. I used only an RP dataset for calibration, which may cause the model to perform worse in other situations. But people who use Mythalion mostly use it for RP anyway, I guess?
Anyway, it works for RP. I haven't tested its performance in other situations. ExLlamaV2 is great.
The 2.30 bpw quant is designed for 8GB VRAM. It is more extreme and only supports up to 2048 context. If some of your VRAM is occupied by other programs or the system, lower the allowed context.
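As a rough sanity check of the 8GB claim, here is a back-of-the-envelope estimate. The figures assume standard Llama-2-13B dimensions (40 layers, hidden size 5120, no GQA) and an FP16 KV cache; real usage adds activation buffers and framework overhead on top.

```python
# Rough VRAM estimate for a 2.30 bpw 13B quant at 2048 context.
# Assumes Llama-2-13B shapes: 40 layers, hidden size 5120, FP16 K/V cache.

N_PARAMS = 13e9     # approximate parameter count
BPW = 2.30          # bits per weight after quantization
N_LAYERS = 40       # transformer layers
HIDDEN = 5120       # hidden size
CTX = 2048          # context length in tokens
CACHE_BYTES = 2     # bytes per value in an FP16 cache

# Weight storage: params * bits-per-weight, converted to GiB.
weights_gib = N_PARAMS * BPW / 8 / 2**30

# KV cache: K and V, per layer, per hidden dim, per token.
kv_gib = 2 * N_LAYERS * HIDDEN * CACHE_BYTES * CTX / 2**30

print(f"weights ≈ {weights_gib:.2f} GiB, KV cache ≈ {kv_gib:.2f} GiB")
# roughly 3.5 GiB + 1.6 GiB ≈ 5 GiB before overhead,
# which leaves only modest headroom on an 8 GiB card
```

This is why the context is capped at 2048: the KV cache grows linearly with context length, so doubling the context adds another ~1.6 GiB and overflows 8GB once overhead is counted.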
I wouldn't use it myself though, because its performance is poor compared to the 4 and 6 bpw versions. It just barely works. I'm sharing it in case someone needs it.