DavidAU committed (verified)
Commit 6282909 · Parent(s): db0dc26

Update README.md

Files changed (1): README.md (+0 −1)
@@ -37,7 +37,6 @@ pipeline_tag: text-generation
 <B>L3-Dark-Planet-8B-GGUF - Updates Dec 21 2024: (uploading quants ... refreshed, and new quants):</B>
 - All quants have been "refreshed", quanted with the latest LLAMACPP improvements: better instruction following and output generation across all quants.
 - All quants have also been upgraded with "more bits" for the output tensor (all set at Q8_0) and embed for better performance (this is in addition to the "refresh").
-- New "ARM" quants have been added for machines that can run them, with the output tensor set at Q8_0. (format: ".../Q4_0_4_4.gguf")
 - New specialized quants (in addition to the new refresh/upgrades): "max, max-cpu" (included in the file name) for quants "Q2K" (max-cpu only), "IQ4_XS", "Q6_K" and "Q8_0".
 - "MAX": output tensor / embed at float 16 (better instruction following/output generation than standard quants).
 - "MAX-CPU": output tensor / embed at bfloat 16, which forces these onto the CPU (Nvidia cards / others will vary); this frees up VRAM at a cost in tokens/second, and you also get better instruction following/output generation.
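
The "more bits" upgrade described above (output tensor and embeddings held at a higher precision than the rest of the model) can be reproduced with llama.cpp's quantize tool. A minimal command sketch, assuming a local llama.cpp build and hypothetical file names; the `--output-tensor-type` and `--token-embedding-type` flags are the standard llama.cpp options for overriding those two tensors:

```shell
# Sketch only: paths and model names are placeholders, not the repo's actual files.
# Standard quant with output tensor + embeddings pinned at Q8_0 (the "refresh" style):
./llama-quantize \
  --output-tensor-type Q8_0 \
  --token-embedding-type Q8_0 \
  model-f16.gguf model-Q6_K.gguf Q6_K

# "MAX" style: output tensor + embeddings at F16 instead:
./llama-quantize \
  --output-tensor-type F16 \
  --token-embedding-type F16 \
  model-f16.gguf model-Q6_K-max.gguf Q6_K
```

The only difference between the two invocations is the override type for the two named tensors; the body of the model is quantized to Q6_K in both cases.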