---
license: apache-2.0
language:
- en
tags:
- api
datasets:
- gorilla-llm/APIBench
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I am continuously enhancing the structure of these model descriptions, and they now provide even more comprehensive information to help you find the best models for your specific needs.


# gorilla-falcon-7b-hf-v0 - GGUF
- Model creator: [gorilla-llm](https://huggingface.co/gorilla-llm)
- Original model: [gorilla-falcon-7b-hf-v0](https://huggingface.co/gorilla-llm/gorilla-falcon-7b-hf-v0)

# Note: Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer, which impacts both the original Falcon models and their derived variants.

Here's what you need to know:

**Original Falcon Models:** I am diligently working to provide updated quantized versions of the four original Falcon models to ensure their compatibility with the new llama.cpp versions. Please keep an eye on my Hugging Face model pages for updates on the availability of these models. Downloading them promptly is essential to maintain compatibility with the latest llama.cpp releases.

**Derived Falcon Models:** Right now, the derived Falcon models cannot be re-converted without adjustments from the original model creators. For the time being, these models cannot be used in recent llama.cpp versions at all. **Good news!** The capability to quantize even the older derived Falcon models is in the pipeline and should be incorporated soon. However, the exact timeline is beyond my control.

**Stay Informed:** Application software using the llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp; they will likely provide instructions on how to integrate the new models.

**Monitor Upload Times:** Please keep a close watch on the upload times of the available files on my Hugging Face model pages. This will help you identify which files have already been updated and are ready for download, ensuring you have the most current Falcon models at your disposal.

**Download Promptly:** Once the updated Falcon models are available on my Hugging Face page, be sure to download them promptly to ensure compatibility with the latest [llama.cpp](https://github.com/ggerganov/llama.cpp) versions.
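
If it helps, here is a minimal download sketch using the `huggingface_hub` Python library. The repository ID and file name below are placeholders, not confirmed names; replace them with the actual GGUF file listed on the model page.

```python
# Minimal sketch: download one quantized GGUF file with huggingface_hub.
# Assumption: repo_id and filename are placeholders - use the actual names
# shown on the model page you are downloading from.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="maddes8cht/gorilla-llm-gorilla-falcon-7b-hf-v0-gguf",  # placeholder repository ID
    filename="gorilla-falcon-7b-hf-v0-Q5_1.gguf",                   # placeholder file name
)
print(local_path)  # local cache path of the downloaded file
```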

Please understand that this change specifically affects Falcon and StarCoder models; other models remain unaffected. Consequently, software providers may not emphasize this change as prominently.

As a solo operator of this page, I'm doing my best to expedite the process, but please bear with me as this may take some time.


# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software is using it and can therefore use this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
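
As an illustration, a GGUF file of this model can be run from Python through the `llama-cpp-python` bindings for llama.cpp. This is only a sketch under stated assumptions: the model path is a placeholder for whichever quantized file you downloaded, and the prompt is just a smoke test.

```python
# Sketch: load a quantized GGUF file with the llama-cpp-python bindings
# (pip install llama-cpp-python). The model path is a placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./gorilla-falcon-7b-hf-v0-Q5_1.gguf", n_ctx=2048)

# Gorilla models are trained to emit API calls for a natural-language request.
output = llm("I want to translate a sentence from English to German.", max_tokens=128)
print(output["choices"][0]["text"])
```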

# Quantization variants

A number of quantized files are available. Here is how to choose the one that suits you best:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that prevent certain models from being compatible with the modern K-quants.
Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing certain parts of the model affects quality in different ways. Quantizing some parts more and others less yields a more capable model at the same file size, or a smaller file size and lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K you should find it really hard to detect a quality difference from the original model; asking your model the same question twice may produce bigger differences than the quantization does.
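
To see which quantization variants are actually available before choosing, you can list the repository's files. The sketch below assumes the `huggingface_hub` library and uses a placeholder repository ID.

```python
# Sketch: list the GGUF quantization variants available in a repository.
# Assumption: the repo_id below is a placeholder - use the repository this
# README belongs to.
from huggingface_hub import HfApi

files = HfApi().list_repo_files("maddes8cht/gorilla-llm-gorilla-falcon-7b-hf-v0-gguf")
for name in files:
    if name.endswith(".gguf"):
        print(name)  # one file per quantization type (e.g. Q4_0, Q5_1, Q8_0)
```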



---
# Original Model Card:
license: apache-2.0
---

***End of original Model File***
---


## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for your support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>