---
license: cc-by-nc-4.0
tags:
- exllamav2
- exl2
- Text Generation
- not-for-all-audiences
- nsfw
- Transformers
- llama
- text-generation-inference
---
# Amethyst 13B Mistral - EXL2 - 6 bpw
- Model creator: [Undi](https://huggingface.co/Undi95)
- Original model: [Amethyst 13B Mistral](https://huggingface.co/Undi95/Amethyst-13B-Mistral)
## Description
- 6 bits per weight.

I converted the model using the convert.py script from the exllamav2 repo:
https://github.com/turboderp/exllamav2

The script's documentation:
https://github.com/turboderp/exllamav2/blob/master/doc/convert.md

I used the WikiText-2-v1 dataset for calibration:
https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
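
For reference, a conversion along these lines produces a 6 bpw EXL2 quant. This is only a sketch: the paths and the local parquet filename are placeholders, and the flags are the ones described in the convert.py documentation linked above, not a record of the exact command used for this repo.

```sh
# Sketch only; all paths are placeholders.
#   -i  : directory containing the original FP16 model
#   -o  : temporary working directory for the conversion job
#   -cf : output directory for the finished quantized model
#   -c  : calibration dataset (the WikiText-2-v1 test parquet linked above)
#   -b  : target average bits per weight
python convert.py \
    -i /path/to/Amethyst-13B-Mistral \
    -o /path/to/exl2_working_dir \
    -cf /path/to/Amethyst-13B-Mistral-exl2-6bpw \
    -c /path/to/wikitext-2-v1-test.parquet \
    -b 6.0
```

The -b value is an average target: exllamav2 measures each layer against the calibration data and picks per-layer quantization settings so the overall model lands at roughly 6 bits per weight.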