maddes8cht committed
Commit 4d13376
1 Parent(s): c6596d5

"Update README.md"

Files changed (1)
  1. README.md +7 -2
README.md CHANGED
@@ -24,7 +24,13 @@ I am continuously enhancing the structure of these model descriptions, and they

# Note: Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

- As noted on the [Llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of Llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer. While I am working diligently to make the updated models available for you, please be aware of the following:
+ As noted on the [Llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of Llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer, which impacts both the original Falcon models and their derived variants.
+
+ Here's what you need to know:
+
+ **Original Falcon Models:** I am working diligently to provide updated quantized versions of the four original Falcon models to ensure their compatibility with the new llama.cpp versions. Please keep an eye on my Hugging Face model pages for updates on the availability of these models; downloading them promptly is essential to maintain compatibility with the latest llama.cpp releases.
+
+ **Derived Falcon Models:** Note that derived Falcon models cannot be re-converted without adjustments from the original model creators, so their compatibility with the new llama.cpp versions depends on the actions of those creators. So far, these models cannot be used with recent llama.cpp versions at all.

**Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp; they will likely provide instructions on how to integrate the new models.

@@ -106,7 +112,6 @@ OpenBuddy按“原样”提供,不附带任何种类的明示或暗示的保
## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

-
<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
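
For readers who do not want to wait for the re-uploaded files and who have the original Hugging Face Falcon weights, the snippet below is a minimal sketch of the usual llama.cpp re-quantization workflow: convert the full-precision checkpoint to a GGUF file, then quantize it with a freshly built `quantize` tool so the file carries the tokenizer data expected by current llama.cpp. The paths, the converter script name (`convert-falcon-hf-to-gguf.py`), and its arguments are assumptions based on llama.cpp around October 2023; script names and flags have changed between versions, so verify them against your own checkout before running.

```python
import subprocess
from pathlib import Path

# Hypothetical locations -- adjust to your own clone and model download.
LLAMA_CPP = Path.home() / "llama.cpp"              # local build of ggerganov/llama.cpp
MODEL_DIR = Path.home() / "models" / "falcon-7b"   # original Hugging Face Falcon weights

# Step 1: convert the Hugging Face checkpoint to an f16 GGUF file.
# The Falcon-specific converter and its ftype argument (1 = f16) are as
# shipped around October 2023; check the script name in your checkout.
subprocess.run(
    ["python3", str(LLAMA_CPP / "convert-falcon-hf-to-gguf.py"), str(MODEL_DIR), "1"],
    check=True,
)

# Step 2: re-quantize the converted file with the rebuilt `quantize` tool.
f16_file = next(MODEL_DIR.glob("*f16.gguf"))  # output name depends on the converter version
subprocess.run(
    [str(LLAMA_CPP / "quantize"), str(f16_file),
     str(MODEL_DIR / "falcon-7b-Q4_K_M.gguf"), "Q4_K_M"],
    check=True,
)
```

Note that this only works when the original full-precision weights are available; as described above, derived Falcon models cannot be rebuilt this way from an old GGUF alone and need their creators to re-run the conversion.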