maddes8cht committed on
Commit
9ac575b
1 Parent(s): 385b737

"Update README.md"

Files changed (1): README.md (+1 -3)
README.md CHANGED
@@ -31,7 +31,7 @@ Here's what you need to know:
 
 **Original Falcon Models:** I am diligently working to provide updated quantized versions of the four original Falcon models to ensure their compatibility with the new llama.cpp versions. Please keep an eye on my Hugging Face Model pages for updates on the availability of these models. Promptly downloading them is essential to maintain compatibility with the latest llama.cpp releases.
 
- **Derived Falcon Models:** It's important to note that the derived Falcon models cannot be re-converted without adjustments from the original model creators. Therefore, the compatibility of these derived models with the new llama.cpp versions depends on the actions of the original model creators. So far, these models cannot be used in recent llama.cpp versions at all.
+ **Derived Falcon Models:** Right now, the derived Falcon models cannot be re-converted without adjustments from the original model creators. So far, these models cannot be used in recent llama.cpp versions at all. **Good news!** The capability to quantize even the older derived Falcon models is in the pipeline and will be incorporated soon. However, the exact timeline is beyond my control.
 
 **Stay Informed:** Application software using llama.cpp libraries will follow soon. Keep an eye on the release schedules of your favorite software applications that rely on llama.cpp. They will likely provide instructions on how to integrate the new models.
 
@@ -46,8 +46,6 @@ As a solo operator of this page, I'm doing my best to expedite the process, but
 
 
 
-
-
 # About GGUF format
 
 `gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
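For readers curious what a `gguf` file looks like on disk, here is a minimal sketch that reads the fixed header fields. It assumes the GGUF v2+ layout described in the spec in the ggml repository (magic bytes, then a little-endian `uint32` version and two `uint64` counts); `read_gguf_header` is an illustrative helper name, not part of llama.cpp or ggml.

```python
import struct

def read_gguf_header(path):
    """Read the fixed-size header at the start of a GGUF file.

    Assumes the GGUF v2+ layout: 4 magic bytes b"GGUF", then
    little-endian uint32 version, uint64 tensor count, and
    uint64 metadata key-value count.
    """
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, tensor_count, kv_count = struct.unpack("<IQQ", f.read(20))
    return {
        "version": version,
        "tensor_count": tensor_count,
        "metadata_kv_count": kv_count,
    }
```

A quick check like this is handy for confirming that a downloaded model is actually in the newer `gguf` container rather than the older `ggml` formats, which start with different magic bytes.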