ZeroWw committed on
Commit a7a786f
1 Parent(s): b645964

Update README.md

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -4,10 +4,9 @@ language:
 - en
 ---
 
-My own quantizations.
-output and embed tesnors quantized to f16.
+My own (ZeroWw) quantizations.
+output and embed tensors quantized to f16.
 all other tensors quantized to q5_k or q6_k.
-the q8_0 version is pure (all tensors quantized to Q8_0 just for reference)
 
 Result:
 both f16.q6 and f16.q5 are smaller than q8_0 standard quantization
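
For readers who want to reproduce this tensor-type mix, here is a minimal sketch assuming llama.cpp's llama-quantize tool and its --output-tensor-type / --token-embedding-type overrides; the model filenames are placeholders, not files from this repository.

```bash
# Sketch only: assumed llama.cpp llama-quantize CLI; filenames are placeholders.
# Keep the output and token-embedding tensors at f16, quantize the rest to q6_k:
./llama-quantize --output-tensor-type f16 --token-embedding-type f16 \
    model.f16.gguf model.f16.q6.gguf q6_k

# Same scheme with q5_k for the remaining tensors:
./llama-quantize --output-tensor-type f16 --token-embedding-type f16 \
    model.f16.gguf model.f16.q5.gguf q5_k

# Pure q8_0 reference build (all tensors q8_0, no per-tensor overrides):
./llama-quantize model.f16.gguf model.q8_0.gguf q8_0
```

The idea behind the scheme, as the README states, is that keeping only the output and embedding tensors at f16 while compressing everything else to q5_k or q6_k still yields files smaller than a uniform q8_0 quantization.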