maddes8cht committed on
Commit
27fd325
1 Parent(s): 5dd17e7

"Update README.md"

Files changed (1)
  1. README.md +19 -10
README.md CHANGED
@@ -13,20 +13,27 @@ I'm constantly enhancing these model descriptions to provide you with the most r
 - Model creator: [ehartford](https://huggingface.co/ehartford)
 - Original model: [samantha-falcon-7b](https://huggingface.co/ehartford/samantha-falcon-7b)
 
+# K-Quants in Falcon 7b models
+
+New Llama.cpp releases now allow for K-quantization of models that were previously incompatible with K-quants. This is achieved by employing a fallback solution for the model layers that cannot be accurately quantized with K-quants.
+
+For Falcon 7B models, although only a quarter of the layers can be quantized with true K-quants, this approach still benefits from utilizing various legacy quantization types, such as Q4_0, Q4_1, Q5_0, and Q5_1. As a result, it offers better quality at the same file size, or smaller file sizes with comparable performance.
+
+So this solution ensures improved performance and efficiency over the legacy Q4_0, Q4_1, Q5_0 and Q5_1 quantizations.
+
 # Important Update for Falcon Models in llama.cpp Versions After October 18, 2023
 
-As noted on the [Llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all new Llama.cpp releases after October 18, 2023, will require a re-quantization due to the new BPE tokenizer.
+As previously noted on the [Llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all Llama.cpp releases after October 18, 2023, require re-quantization due to the implementation of the new BPE tokenizer.
 
-**Good news!** I am glad that my re-quantization process for Falcon Models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp software.
+**Update:** The re-quantization process for Falcon models is now complete, and the latest quantized models are available for download. To ensure continued compatibility with recent llama.cpp software, you need to update your Falcon models.
 
 **Key Points:**
 
 - **Stay Informed:** Keep an eye on the release schedules of software applications that use llama.cpp libraries.
-- **Monitor Upload Times:** Re-quantization is *almost* done. Watch for updates on my Hugging Face Model pages.
-
-**Important Compatibility Note:** Old software will work with old Falcon models, but expect updated software to exclusively support the new models.
+- **Monitor Upload Times:** Re-quantization is complete. Watch for updates on my Hugging Face Model pages.
 
-This change primarily affects **Falcon** and **Starcoder** models, with other models remaining unaffected.
+This change primarily affects **Falcon** and **Starcoder** models, with other models remaining unaffected. If you haven't already, please update your Falcon models for seamless compatibility with the latest llama.cpp versions.
 
@@ -39,19 +46,21 @@ The core project making use of the ggml library is the [llama.cpp](https://githu
 
 # Quantization variants
 
-There is a bunch of quantized files available. How to choose the best for you:
+A number of quantized files are available to cater to your specific needs. Here's how to choose the best option for you:
 
 # Legacy quants
 
 Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
 Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
-Falcon 7B models cannot be quantized to K-quants.
+## Note:
+There is now an option to use K-quants even for previously 'incompatible' models, although this involves a fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
+(This mainly refers to Falcon 7b and Starcoder models.)
 
 # K-quants
 
-K-quants are based on the idea that the quantization of certain parts affects the quality in different ways. If you quantize certain parts more and others less, you get a more powerful model with the same file size, or a smaller file size and lower memory load with comparable performance.
+K-quants are designed around the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
 So, if possible, use K-quants.
-With a Q6_K you should find it really hard to find a quality difference to the original model - ask your model two times the same question and you may encounter bigger quality differences.
+With a Q6_K, you'll likely find it hard to discern a quality difference from the original model - ask your model the same question twice, and you may well see bigger differences between the two answers than between the quantized and the original model.
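The K-quant fallback described in the diff above stems from how llama.cpp packs K-quants: weights are grouped into super-blocks of 256 values (the `QK_K` constant), so a tensor whose row size is not a multiple of 256 cannot be stored as a true K-quant and falls back to a legacy type for that tensor. A minimal sketch of that divisibility check; the helper function is illustrative rather than llama.cpp's actual API, and the Falcon hidden sizes are taken from the published model configs:

```python
# Illustrative check, not llama.cpp's actual code: K-quants pack weights
# into super-blocks of QK_K = 256 values, so a tensor row size must be a
# multiple of 256 to use true K-quants; otherwise llama.cpp falls back to
# a legacy type (Q4_0, Q5_0, ...) for that tensor.
QK_K = 256  # super-block size used by llama.cpp K-quants

def supports_true_k_quants(row_size: int) -> bool:
    """Hypothetical helper: can a row of this size be K-quantized?"""
    return row_size % QK_K == 0

# Falcon 7B's hidden size is 4544 (from the published config), which is
# not a multiple of 256 - hence the fallback for those layers.
print(supports_true_k_quants(4544))  # False (4544 = 17 * 256 + 192)
# Falcon 40B's hidden size of 8192 divides evenly, consistent with those
# models taking K-quants without any fallback.
print(supports_true_k_quants(8192))  # True
```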
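As a rough companion to the advice above ("if possible, use K-quants"), the trade-off between variants can be sketched as a rule of thumb. Everything below is illustrative: the function, the RAM thresholds, and the picks are assumptions for a roughly 7B model, not an official llama.cpp recommendation; only the quantization type names themselves come from llama.cpp:

```python
# Illustrative rule of thumb, not an official recommendation: prefer
# K-quants when the model supports them, then trade file size against
# quality via the quant level. Thresholds are rough guesses for ~7B.
def pick_quant(ram_gb: float, k_quants_supported: bool) -> str:
    if not k_quants_supported:
        # Legacy-only models, e.g. older Falcon 7B conversions.
        return "Q5_0" if ram_gb >= 8 else "Q4_0"
    if ram_gb >= 12:
        return "Q6_K"    # hardly distinguishable from the original
    if ram_gb >= 8:
        return "Q4_K_M"  # common quality/size balance
    return "Q3_K_M"      # smallest here with still-acceptable quality

print(pick_quant(16, True))   # Q6_K
print(pick_quant(6, False))   # Q4_0
```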