qwp4w3hyb committed
Commit: 39f9c77
Parent: 65ce7a1

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -22,9 +22,9 @@ tags:
 
 - Not supported in llama.cpp master; Requires the latest version of the phi3 128k [branch](https://github.com/ggerganov/llama.cpp/pull/7225)
 - quants & imatrix are still in the oven will follow soon TM
-# - quants done with an importance matrix for improved quantization loss
-# - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt)
-# - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S
+<!-- - quants done with an importance matrix for improved quantization loss -->
+<!-- - gguf & imatrix generated from bf16 for "optimal" accuracy loss (some say this is snake oil, but it can't hurt) -->
+<!-- - Wide coverage of different gguf quant types from Q\_8\_0 down to IQ1\_S -->
 - Quantized with [llama.cpp](https://github.com/ggerganov/llama.cpp) WIP [branch](https://github.com/ggerganov/llama.cpp/pull/7225)
 - Imatrix generated with [this](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) multi-purpose dataset.
 ```
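
For context, the workflow the README alludes to (bf16 GGUF, then an importance matrix, then imatrix-aware quants) typically looks like the sketch below. This is a minimal sketch, not the author's exact invocation: the model directory, output file names, and `calibration.txt` are placeholders, and it assumes a llama.cpp build from around the linked PR 7225 era, where the tools are named `imatrix` and `quantize` and the converter supports bf16 output.

```bash
# Sketch of the usual llama.cpp imatrix + quantization flow.
# All paths and file names below are placeholders, not the author's actual files.

# 1. Convert the HF checkpoint to a bf16 GGUF
#    (assumes a checkout recent enough for convert-hf-to-gguf.py to emit bf16).
python convert-hf-to-gguf.py ./Phi-3-mini-128k-instruct \
    --outtype bf16 --outfile phi3-128k-bf16.gguf

# 2. Compute an importance matrix over a calibration text file,
#    e.g. the multi-purpose dataset linked above, saved locally.
./imatrix -m phi3-128k-bf16.gguf -f calibration.txt -o imatrix.dat

# 3. Produce imatrix-aware quants; repeat for each target type.
./quantize --imatrix imatrix.dat phi3-128k-bf16.gguf phi3-128k-Q8_0.gguf Q8_0
./quantize --imatrix imatrix.dat phi3-128k-bf16.gguf phi3-128k-IQ4_XS.gguf IQ4_XS
```

Quantizing from bf16 rather than an already-quantized intermediate is the "optimal accuracy loss" point the commented-out notes make; the imatrix mainly helps the low-bit IQ types, where per-weight importance guides which weights keep precision.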