---
license: cc-by-nc-2.0
language: en
---
|
|
|
These are GGUF quantized versions of [lizpreciatior/lzlv_70b_fp16_hf](https://huggingface.co/lizpreciatior/lzlv_70b_fp16_hf). |
|
|
|
The importance matrix was trained for 100K tokens (200 batches of 512 tokens) using `wiki.train.raw`. |
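For reference, here is a minimal sketch of how such an importance matrix could be produced and applied with llama.cpp's `imatrix` and `quantize` tools. The file names are placeholders and the exact flags may differ between llama.cpp versions:

```sh
# Hypothetical example: build an importance matrix from wiki.train.raw
# in 512-token chunks (file names are placeholders, flags may vary by version).
./imatrix -m lzlv_70b_fp16.gguf -f wiki.train.raw -c 512 -o imatrix.dat

# The resulting matrix can then be passed to the quantizer, e.g. for IQ2_XS.
./quantize --imatrix imatrix.dat lzlv_70b_fp16.gguf lzlv_70b-IQ2_XS.gguf IQ2_XS
```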
|
|
|
The IQ2_XXS and IQ2_XS versions are compatible with llama.cpp version `147b17a` or later. The IQ3_XXS version requires `f4d7e54` or later.
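If you build llama.cpp from source, one way to check whether your checkout already includes one of these commits (run inside the llama.cpp repository):

```sh
# Succeeds (prints "ok") if commit 147b17a is an ancestor of your current HEAD.
git merge-base --is-ancestor 147b17a HEAD && echo ok || echo "please update llama.cpp"
```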
|
|
|
Some model files larger than 50 GB are split into smaller parts. To concatenate them, use the `cat` command (on Windows, use PowerShell), for example: `cat foo-Q6_K.gguf.* > foo-Q6_K.gguf`
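Once reassembled, the file can be used like any other single-file GGUF model, for example with llama.cpp's `main` example. The prompt and the optional `-ngl` GPU offload value below are placeholders:

```sh
# Run the merged quant; -n limits the number of generated tokens,
# -ngl offloads layers to the GPU if your build supports it.
./main -m foo-Q6_K.gguf -p "Hello, my name is" -n 128 -ngl 40
```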