maddes8cht committed
Commit
e09b7c5
1 Parent(s): 2e9fc74

"Update README.md"

Files changed (1): README.md +7 -2
README.md CHANGED
@@ -12,7 +12,13 @@ I am continuously enhancing the structure of these model descriptions, and they
 
 # Note: Important Update for Falcon Models in llama.cpp Versions After October 18, 2023
 
-As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer. While I am working diligently to make the updated models available for you, please be aware of the following:
+As noted on the [llama.cpp](https://github.com/ggerganov/llama.cpp#hot-topics) GitHub repository, all new releases of llama.cpp will require a re-quantization due to the implementation of the new BPE tokenizer, which impacts both the original Falcon models and their derived variants.
+
+Here's what you need to know:
+
+**Original Falcon Models:** I am working diligently to provide updated quantized versions of the four original Falcon models so that they remain compatible with the new llama.cpp versions. Please keep an eye on my Hugging Face model pages for updates on their availability. Downloading the updated files promptly is essential to maintain compatibility with the latest llama.cpp releases.
+
+**Derived Falcon Models:** Please note that derived Falcon models cannot be re-converted without adjustments from the original model creators, so their compatibility with the new llama.cpp versions depends on the actions of those creators. For now, these derived models cannot be used with recent llama.cpp versions at all.
 
 **Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp. They will likely provide instructions on how to integrate the new models.
 
@@ -75,7 +81,6 @@ What is a falcon? Can I keep one as a pet?
 ## Please consider supporting my work
 **Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I am hoping for support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.
 
-
 <center>
 
 [![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
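Once the re-quantized Falcon files are published, they can be picked up and smoke-tested directly from Python. The sketch below is a minimal illustration using the `huggingface_hub` and `llama-cpp-python` packages; the `repo_id` and `filename` values are hypothetical placeholders and need to be replaced with the actual updated model files once they appear on the Hugging Face model pages.

```python
# Minimal smoke test: download a re-quantized Falcon GGUF file and confirm it
# loads with a recent llama.cpp build (via the llama-cpp-python bindings).
# Requires: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Hypothetical placeholder names; substitute the real repo and file once published.
model_path = hf_hub_download(
    repo_id="maddes8cht/falcon-7b-instruct-gguf",
    filename="falcon-7b-instruct-Q4_K_M.gguf",
)

# A file converted before the October 2023 BPE tokenizer change will typically
# be rejected here by newer llama.cpp builds; the updated files should load cleanly.
llm = Llama(model_path=model_path, n_ctx=2048)

# Generate a short completion as a quick sanity check.
output = llm("What is a falcon?", max_tokens=32)
print(output["choices"][0]["text"])
```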