---
license: apache-2.0
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# WizardLM-Uncensored-Falcon-40b - GGUF
- Model creator: [ehartford](https://huggingface.co/ehartford)
- Original model: [WizardLM-Uncensored-Falcon-40b](https://huggingface.co/ehartford/WizardLM-Uncensored-Falcon-40b)

# Important Update for Falcon Models in llama.cpp Versions After October 18, 2023

As noted on the [llama.cpp GitHub repository](https://github.com/ggerganov/llama.cpp#hot-topics), all new llama.cpp releases after October 18, 2023, will require a re-quantization due to the new BPE tokenizer.

**Good news!** My re-quantization process for Falcon models is nearly complete. Download the latest quantized models to ensure compatibility with recent llama.cpp software.

**Key Points:**

- **Stay Informed:** Keep an eye on software application release schedules using llama.cpp libraries.
- **Monitor Upload Times:** Re-quantization is *almost* done. Watch for updates on my Hugging Face Model pages.

**Important Compatibility Note:** Old software will still work with the old Falcon models, but updated software will support only the new ones.

This change primarily affects **Falcon** and **Starcoder** models, with other models remaining unaffected.




# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore run this model.
The core project using the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
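
If you want to try the model from Python, the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) bindings wrap llama.cpp and can load GGUF files directly. A minimal sketch follows; the file name is a placeholder for whichever quantized file you actually download from this repository:

```python
# Minimal sketch: loading a GGUF quant of this model with the
# llama-cpp-python bindings. The file name below is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="wizardlm-uncensored-falcon-40b.Q4_K_M.gguf",  # placeholder file name
    n_ctx=2048,  # context window size
)
print(llm.n_ctx())  # confirm the model loaded with the requested context size
```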

# Quantization variants

A number of quantized files are available. Here is how to choose the one that is best for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
Falcon 7B models cannot be quantized to K-quants.

# K-quants

K-quants are based on the idea that quantizing different parts of the model affects quality in different ways. By quantizing some parts more aggressively and others less, you get either a more capable model at the same file size, or a smaller file size and lower memory load at comparable quality.
So, if possible, use K-quants.
With a Q6_K, you should find it really hard to detect any quality difference from the original model; ask your model the same question twice and you may encounter bigger differences between the two answers.
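
As a rough illustration of putting this advice into practice, here is a sketch that lists the quantization variants a repository offers and filters for K-quants. The repo id below is a placeholder, not necessarily this repository's exact name:

```python
# Sketch: inspect which quantization variants a repository offers and
# prefer a K-quant when one is available. The repo id is a placeholder.
from huggingface_hub import list_repo_files

repo_id = "maddes8cht/ehartford-WizardLM-Uncensored-Falcon-40b-gguf"  # placeholder
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]

# K-quants carry "_K" in their names (e.g. Q4_K_M, Q6_K); prefer them.
k_quants = [f for f in gguf_files if "_K" in f]
print(k_quants or gguf_files)
```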




---

# Original Model Card:
This is WizardLM trained on top of tiiuae/falcon-40b, with a subset of the dataset: responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.

Shout out to the open source AI/ML community, and everyone who helped me out.

Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car. Publishing anything this model generates is the same as publishing it yourself. You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Prompt format is WizardLM.

```
What is a falcon?  Can I keep one as a pet?
### Response:
```
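
For illustration, here is how that prompt format might be driven from Python via llama-cpp-python. This is a sketch: the model path is a placeholder, and using `###` as a stop marker is an assumption based on the section header in the format above:

```python
# Sketch: wrapping a user question in the WizardLM prompt format shown above.
# The model path is a placeholder; any GGUF quant of this model should work.
from llama_cpp import Llama

def wizardlm_prompt(question: str) -> str:
    # The format is simply the question followed by a "### Response:" marker.
    return f"{question}\n### Response:"

llm = Llama(model_path="wizardlm-uncensored-falcon-40b.Q4_K_M.gguf")  # placeholder
out = llm(
    wizardlm_prompt("What is a falcon? Can I keep one as a pet?"),
    max_tokens=256,
    stop=["###"],  # assumed stop marker: halt before the model opens a new section
)
print(out["choices"][0]["text"].strip())
```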

Thank you [chirper.ai](https://chirper.ai) for sponsoring some of my compute!

***End of original Model File***
---


## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, and the new GitHub Sponsors platform, and I'm hoping for support and contributions to keep these kinds of models available. Your support will enable me to provide even more valuable resources and to maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>