Commit 2e77f43 by qwp4w3hyb (parent: fdb7e1f)

Update README.md

Files changed (1): README.md (+0, -1)
README.md CHANGED
@@ -21,7 +21,6 @@ base_model: google/gemma-2-9b-it
 - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
 - experimental custom quant types
 - `_L` with `--output-tensor-type f16 --token-embedding-type f16` (same as bartowski's)
-- `_XL` with `--output-tensor-type bf16 --token-embedding-type bf16` (same size as _L, in theory even closer to the source model)
 - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) release [b3259](https://github.com/ggerganov/llama.cpp/releases/tag/b3259)
 - Imatrix generated with [this](https://gist.github.com/bartowski1182/eb213dccb3571f863da82e99418f81e8) multi-purpose dataset by [bartowski](https://huggingface.co/bartowski).
 ```
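For reference, a `_L`-style quant as described in the README above could be produced with llama.cpp's `llama-quantize` tool, passing the `--output-tensor-type f16 --token-embedding-type f16` flags quoted in the diff. This is a sketch: the file paths, the Q5_K_M quant type, and the imatrix filename are placeholders, not taken from this commit.

```shell
# Sketch of building an "_L" variant (f16 output/embedding tensors) with
# llama.cpp's llama-quantize. Paths, quant type, and imatrix file are
# hypothetical examples; only the two --*-type flags come from the README.
./llama-quantize \
  --imatrix imatrix.dat \
  --output-tensor-type f16 \
  --token-embedding-type f16 \
  gemma-2-9b-it-f16.gguf gemma-2-9b-it-Q5_K_M_L.gguf Q5_K_M
```

The removed `_XL` variant would differ only in using `bf16` for both flags.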