---
license: cc-by-nc-4.0
---

A simple Python script (`gguf-imat.py`) that generates various GGUF-Imatrix quantizations from a Hugging Face author/model input, targeting Windows and NVIDIA hardware.

The script is set up for a Windows machine with 8GB of VRAM and an NVIDIA GPU. If you want to change the `-ngl` (number of GPU layers) value, you can do so at line 120; it is only relevant during the `--imatrix` data generation (see the sketch below). If you don't have enough VRAM, you can decrease `-ngl`, or set it to 0 to run all layers on system RAM instead.
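For reference, the imatrix step boils down to a call into llama.cpp's `imatrix` tool. A minimal sketch of that kind of call, assuming the binary sits in a `llama.cpp` folder and using placeholder file names (the actual script derives these from the model name):

```python
import subprocess

# Placeholder paths; the real script builds these from your author/model input.
model_path = "models/model-f16.gguf"
imatrix_data = "imatrix/imatrix.txt"
ngl = 7  # number of GPU layers to offload; set to 0 to use system RAM only

subprocess.run([
    "llama.cpp/imatrix.exe",
    "-m", model_path,                 # FP16 GGUF to profile
    "-f", imatrix_data,               # calibration text
    "-o", "imatrix/imatrix.dat",      # resulting importance matrix
    "-ngl", str(ngl),                 # lower this if you run out of VRAM
], check=True)
```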

Your `imatrix.txt` file is expected to be located inside the `imatrix` folder. The included file is considered a good option; this discussion is where it came from.

Adjust `quantization_options` at line 133.
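As an illustration (not the script's exact contents), `quantization_options` is presumably a list of llama.cpp quantization types that gets looped over with the `quantize` tool, roughly like:

```python
import subprocess

# Hypothetical selection; edit to match the quants you actually want.
quantization_options = ["Q4_K_M", "Q5_K_M", "Q6_K", "Q8_0", "IQ4_XS"]

for quant in quantization_options:
    subprocess.run([
        "llama.cpp/quantize.exe",
        "--imatrix", "imatrix/imatrix.dat",   # reuse the importance matrix
        "models/model-f16.gguf",              # FP16 source GGUF
        f"models/model-{quant}-imat.gguf",    # quantized output
        quant,                                # target quantization type
    ], check=True)
```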

Requirements:

- Python 3.11
- `pip install huggingface_hub` (see the sketch below)
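`huggingface_hub` is what pulls the model down from Hugging Face. A minimal sketch of the kind of call involved, with a placeholder repo id (the script builds the real one from your author/model input):

```python
from huggingface_hub import snapshot_download

# Placeholder repo id; replace with the actual author/model.
snapshot_download(
    repo_id="author/model",
    local_dir="models/model",  # where the downloaded model files land
)
```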

If this proves useful for you, feel free to credit and share the repository.