EXL2 quants of crestf411/llama-3-daybreak-storywriter-v0.2-70b-hf
- 2.50 bits per weight
- 3.00 bits per weight
- 3.50 bits per weight
- 4.00 bits per weight
- 4.50 bits per weight
- 5.00 bits per weight
- 6.00 bits per weight
- 8.00 bits per weight
Note: quantizing at 2.0 bpw produces a non-functional quant, so it is not offered.
Daybreak (May 2024) v0.2 is a LoRA applied on top of https://huggingface.co/tdrussell/Llama-3-70B-Instruct-Storywriter
The venerable mradermacher has made GGUF quants available: GGUF, i1-GGUF
Beware, depraved. Not suitable for any audience.
Suggested sampler settings as a starting point (from a random anon's recommendation):
- Typical P 0.98
- Min P 0.05
- Smoothing 0.24
- Dynamic temp 0.4 (low) to 2.95 (high)
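For intuition about how the Min P and dynamic temperature settings above interact, here is a minimal sketch in Python. This is not any particular backend's implementation; the entropy-based temperature mapping is one common formulation, and the function name and signature are assumptions for illustration.

```python
import numpy as np

def sample(logits, min_p=0.05, dyn_temp_low=0.4, dyn_temp_high=2.95):
    """Sketch of min-P filtering plus entropy-based dynamic temperature.

    Dynamic temperature scales between low and high based on the
    normalized entropy of the distribution: confident distributions
    get a low temperature, flat ones a high temperature. Backends
    differ in the exact mapping; this is one plausible version.
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()

    # Higher entropy (more uncertainty) -> higher temperature.
    entropy = -(probs * np.log(probs + 1e-12)).sum()
    max_entropy = np.log(len(probs))
    t = dyn_temp_low + (dyn_temp_high - dyn_temp_low) * (entropy / max_entropy)

    # Re-apply softmax at the chosen temperature.
    scaled = np.exp((logits - logits.max()) / t)
    scaled /= scaled.sum()

    # Min-P: drop tokens whose probability is below min_p * top probability.
    keep = scaled >= min_p * scaled.max()
    scaled = np.where(keep, scaled, 0.0)
    scaled /= scaled.sum()
    return np.random.choice(len(scaled), p=scaled)
```

With a strongly peaked distribution the entropy term keeps the temperature near the low end and min-P prunes everything but the top token; with a flat distribution the temperature rises toward the high end and more tokens survive the cutoff.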
The LoRA is slop-free (the base model is not, so complete elimination is very hard). The regexes below return 0 matches:
- `barely above a whisper`
- `shiver([s]?) down`
- ` ministration`
- `audible (["']?)p[l]?op`
- `buck([s]?) my `
- `buck([s]?) h[ei][rs] `
- `[Dd]espite h[ie][mr]self`
- `slick slit`
- `whatever it takes`
If you find other slop phrases, please let me know. Also let me know about phrases that this or other Daybreak models use too often; I continuously update the dataset.