---
datasets:
  - Oniichat/bluemoon_roleplay_chat_data_300k_messages
inference: false
language:
  - en
license: llama2
model_creator: nRuaif
model_link: https://huggingface.co/nRuaif/Mythalion-Kimiko-v2
model_name: Mythalion-Kimiko-v2
model_type: llama
pipeline_tag: text-generation
quantized_by: Eigeen
tags:
  - text generation
  - instruct
thumbnail: null
---

# Mythalion 13B Kimiko-v2 - ExLlamaV2

Original model: [Mythalion-Kimiko-v2](https://huggingface.co/nRuaif/Mythalion-Kimiko-v2)

## Description

This is my first attempt at quantization. I used only a roleplay (RP) dataset for calibration, which may make the model perform worse in other situations, but most people who use Mythalion use it for RP anyway.
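For reference, a quantization like this is typically produced with ExLlamaV2's `convert.py` script. The command below is a sketch, not the exact command used here: all paths and the calibration parquet filename are placeholders, and flag names may differ between ExLlamaV2 versions.

```shell
# Quantize to 6.05 bpw, calibrating against an RP dataset.
# All paths below are placeholders; -c points at calibration data in parquet format.
python convert.py \
    -i /path/to/Mythalion-Kimiko-v2 \
    -o /path/to/working_dir \
    -cf /path/to/output_dir \
    -b 6.05 \
    -c /path/to/rp_calibration.parquet
```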

Anyway, it works well for RP. I haven't tested its performance in other scenarios. ExLlamaV2 is great.

The 6.05 bpw quantization is sized for 16 GB of VRAM. If you have 24 GB of VRAM, you can extend the context to at least 8192 tokens; I have not calculated the exact limits.
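As a rough sanity check on those numbers, here is a back-of-the-envelope estimate (my own sketch, not the author's calculation). It counts only the quantized weights plus an fp16 KV cache, assumes standard Llama-2 13B geometry (40 layers, hidden size 5120), and ignores activation buffers and framework overhead:

```python
def estimate_vram_gb(n_params, bpw, n_layers, hidden_size, ctx_len, kv_bytes=2):
    """Rough VRAM estimate in decimal GB: quantized weights + fp16 KV cache.

    Ignores activation buffers, scratch space, and framework overhead,
    so real usage will be somewhat higher.
    """
    weights = n_params * bpw / 8                                 # quantized weight bytes
    kv_cache = 2 * n_layers * hidden_size * kv_bytes * ctx_len   # K and V per token
    return (weights + kv_cache) / 1e9

# Assumed Llama-2 13B shape: 13e9 params, 40 layers, hidden size 5120.
print(estimate_vram_gb(13e9, 6.05, 40, 5120, 4096))  # ~13.2 GB, near the 16 GB target
print(estimate_vram_gb(13e9, 6.05, 40, 5120, 8192))  # ~16.5 GB, comfortable on 24 GB
```

The estimate is consistent with the claims above: at a 4096-token context the total sits under 16 GB, and doubling the context to 8192 adds roughly 3.4 GB of KV cache, which still fits easily on a 24 GB card.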