maddes8cht committed
Commit a707793
1 Parent(s): 68eb178

"Update README.md"

Files changed (1): README.md +1 -2
README.md CHANGED
@@ -32,13 +32,12 @@ So this solution ensures improved performance and efficiency over legacy Q4_0, Q
 
 ---
 # Brief
-I have a problem with the OpenAssistant falcon *sft* models
+Finally got the OpenAssistant falcon *sft* models working again
 
 * [falcon-7b-sft-top1-696](https://huggingface.co/OpenAssistant/falcon-7b-sft-top1-696)
 * [falcon-40b-sft-top1-560](https://huggingface.co/OpenAssistant/falcon-40b-sft-top1-560)
 * [falcon-40b-sft-mix-1226](https://huggingface.co/OpenAssistant/falcon-40b-sft-mix-1226)
 
-which currently prevents me from re-quantizing these models. It is not clear to me at the moment if this problem can be solved.
 
 
 ---