- Wide coverage of different gguf quant types, from `Q8_0` down to `IQ1_S`
- Experimental custom quant types:
  - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
- Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) commit [5fac350b9cc49d0446fc291b9c4ad53666c77591](https://github.com/ggerganov/llama.cpp/commit/5fac350b9cc49d0446fc291b9c4ad53666c77591) (master from 2024-07-02)
- Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski):

```
./imatrix -m $model_name-bf16.gguf -f calibration_datav3.txt -o $model_name.imatrix
```
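For context, the generated imatrix is then passed to llama.cpp's `quantize` tool. A sketch of producing one quant, assuming the tool names used above; the quant type `Q4_K_M` and the output filenames here are illustrative, not from the original:

```shell
# Standard imatrix-assisted quant (quant type chosen for illustration)
./quantize --imatrix $model_name.imatrix \
  $model_name-bf16.gguf $model_name-Q4_K_M.gguf Q4_K_M

# `_L` variant: same quant, but output and token-embedding
# tensors kept at f16 (the flags listed in the bullet above)
./quantize --imatrix $model_name.imatrix \
  --output-tensor-type f16 --token-embedding-type f16 \
  $model_name-bf16.gguf $model_name-Q4_K_M_L.gguf Q4_K_M
```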