---
license: cc-by-nc-4.0
tags:
  - exllamav2
  - exl2
  - Text Generation
  - not-for-all-audiences
  - nsfw
  - Transformers
  - llama
  - text-generation-inference
---

# Amethyst 13B Mistral - EXL2 - 2.2 bpw

## Description

- 2.2 bits per weight.
- In my testing it's not very usable; the output seems rather nonsensical compared to the 3 bpw quant.
- exllamav2's current conversion script doesn't seem able to go below ~2.18 bpw, at least not with the methods I tried.

I converted the model using the convert.py script from the exllamav2 repo:
https://github.com/turboderp/exllamav2
The conversion process is documented here:
https://github.com/turboderp/exllamav2/blob/master/doc/convert.md

I used the WikiText-2-v1 dataset for calibration:
https://huggingface.co/datasets/wikitext/blob/refs%2Fconvert%2Fparquet/wikitext-2-v1/test/0000.parquet
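For reference, the invocation looks roughly like the sketch below. The paths are placeholders and the exact flag set can vary between exllamav2 versions, so check doc/convert.md for the version you're using.

```sh
# Rough sketch of the conversion command (paths are placeholders):
#   -i  directory containing the original FP16 model in HF format
#   -o  working/output directory for the converted EXL2 model
#   -c  calibration dataset as a parquet file (here: the WikiText-2-v1 test split)
#   -b  target bits per weight
python convert.py \
    -i /path/to/Amethyst-13B-Mistral \
    -o /path/to/Amethyst-13B-Mistral-exl2-2.2bpw \
    -c /path/to/wikitext-2-v1-test-0000.parquet \
    -b 2.2
```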