Update README.md
base_model: meta-llama/Meta-Llama-3.1-70B-Instruct
---

# Quant Infos
- Requires latest master + [Rope Scaling PR](https://github.com/ggerganov/llama.cpp/pull/8676) (see the build sketch after this list)
- quants done with an importance matrix to reduce quantization loss
- Quantized ggufs & imatrix are generated from the HF bf16 weights, staying in bf16 throughout: `safetensors bf16 -> gguf bf16 -> quant`, for *minimal* quant loss (example commands after this list)
- Wide coverage of different gguf quant types, from Q8\_0 down to IQ1\_S
- experimental custom quant types
  - `_L` with `--output-tensor-type f16 --token-embedding-type f16`, which supposedly leads to better accuracy (see the sketch after this list).
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).
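
A minimal sketch of the build requirement above, assuming llama.cpp is built from source with CMake; the clone path and build directory are placeholders, and the linked Rope Scaling PR has to be part of whatever master checkout you build:

```bash
# Build llama.cpp from the current master branch; the Rope Scaling PR (llama.cpp#8676)
# needs to be included in the checkout you build.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release -j
# The tools used in the sketches below (llama-imatrix, llama-quantize) land in build/bin/.
```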
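
A minimal sketch of the `safetensors bf16 -> gguf bf16 -> quant` pipeline and imatrix generation described above, assuming a local copy of the HF model, the build tree from the previous sketch, and the linked calibration gist saved as `calibration.txt`; the file names and the `IQ2_M` target type are illustrative placeholders, not the exact commands used for these uploads:

```bash
# 1) Convert the HF safetensors (bf16) directly to a bf16 gguf, skipping any fp16 intermediate.
python convert_hf_to_gguf.py ./Meta-Llama-3.1-70B-Instruct \
    --outtype bf16 --outfile Llama-3.1-70B-Instruct-bf16.gguf

# 2) Compute the importance matrix over the calibration dataset.
./build/bin/llama-imatrix -m Llama-3.1-70B-Instruct-bf16.gguf \
    -f calibration.txt -o imatrix.dat

# 3) Quantize straight from the bf16 gguf, applying the imatrix (IQ2_M is just an example type).
./build/bin/llama-quantize --imatrix imatrix.dat \
    Llama-3.1-70B-Instruct-bf16.gguf Llama-3.1-70B-Instruct-IQ2_M.gguf IQ2_M
```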
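
For the experimental `_L` variants, the only change is the two tensor-type overrides mentioned above; again a sketch with placeholder file names and `Q4_K_M` as an assumed base type:

```bash
# Same quantize step, but keep the output tensor and the token embeddings in f16;
# that is what the _L suffix denotes.
./build/bin/llama-quantize --imatrix imatrix.dat \
    --output-tensor-type f16 --token-embedding-type f16 \
    Llama-3.1-70B-Instruct-bf16.gguf Llama-3.1-70B-Instruct-Q4_K_M_L.gguf Q4_K_M
```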