GGUF
maddes8cht committed
Commit
a44b8ee
1 Parent(s): 8ae4005

"Update README.md"

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -20,7 +20,13 @@ I am continuously enhancing the structure of these model descriptions, and they

 # Note: Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

-As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer. While I am working diligently to make the updated models available for you, please be aware of the following:
+As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer, which affects both the original Falcon models and their derived variants.
+
+Here's what you need to know:
+
+**Original Falcon Models:** I am working diligently to provide updated quantized versions of the four original Falcon models so that they remain compatible with the new llama.cpp versions. Please keep an eye on my Hugging Face model pages for updates on their availability. Downloading the re-quantized files promptly is essential to stay compatible with the latest llama.cpp releases.
+
+**Derived Falcon Models:** The derived Falcon models cannot be re-converted without adjustments from their original creators, so their compatibility with the new llama.cpp versions depends on those creators taking action. For now, these derived models cannot be used with recent llama.cpp versions at all.

 **Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp; they will likely provide instructions on how to integrate the new models.

@@ -318,7 +324,6 @@ falconllm@tii.ae
 ## Please consider supporting my work
 **Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

-
 <center>

 [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
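
The re-quantization requirement described in the updated section amounts to re-running the llama.cpp conversion and quantization tools against the original Hugging Face checkpoints. The sketch below shows roughly what such a run could look like; it is not taken from this README, and the script name `convert-falcon-hf-to-gguf.py`, the `quantize` binary, the `falcon-7b` directory, and the output file names are assumptions based on the llama.cpp repository as it stood around October 2023 (later releases consolidate conversion into `convert-hf-to-gguf.py`), so verify against the current llama.cpp documentation before running anything.

```python
# Hedged sketch, not part of the original README: re-convert an original
# Falcon checkpoint to GGUF (with the new BPE tokenizer) and re-quantize it
# by driving the llama.cpp tools from Python. Script/binary names, arguments,
# and file names are assumptions based on llama.cpp circa October 2023.
import subprocess
from pathlib import Path

model_dir = Path("falcon-7b")                    # hypothetical local HF checkpoint
f16_gguf = model_dir / "ggml-model-f16.gguf"     # assumed converter output name
q4_gguf = model_dir / "ggml-model-Q4_K_M.gguf"   # target quantized file

# Step 1: convert the Hugging Face checkpoint to an f16 GGUF file.
# The trailing "1" is assumed to select f16 output; check the converter's
# --help before relying on it.
subprocess.run(
    ["python3", "convert-falcon-hf-to-gguf.py", str(model_dir), "1"],
    check=True,
)

# Step 2: re-quantize the f16 GGUF with the quantize tool built from llama.cpp.
subprocess.run(
    ["./quantize", str(f16_gguf), str(q4_gguf), "Q4_K_M"],
    check=True,
)

print(f"Re-quantized model written to {q4_gguf}")
```

For most users of these repositories the simpler path is the one the README describes: wait for the re-converted GGUF files to appear on the model pages and download them again, since the derived models cannot be regenerated this way without changes from their original creators.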