Exl2 Quants of tdrussell/Llama-3-70B-Instruct-Storywriter?
Thanks for the quants as always. I didn't know where else to contact you privately for a request, but is there any chance you'd be willing to do standard (2.4, 4, 5, 6 bpw, etc.) exl2 quants of tdrussell/Llama-3-70B-Instruct-Storywriter? The GGUF results were promising prose-wise, and it would be really nice to see whether an exl2 quant improves the output, without the Llama 3 problems currently plaguing llama.cpp.
I'll add it to the list. These 70B models are hideously heavy to generate exl2 quants for, so they take a while to download, process, and upload.
Saw it, thank you!
Hello again. If you get the chance to do so, could you please make the standard exl2 quants for: failspy/llama-3-70B-Instruct-abliterated?
Again, thank you for the quants thus far, the Storywriter quants you did earlier worked great.
quants here: https://huggingface.co/models?search=LoneStriker/llama-3-70B-Instruct-abliterated