model request
Please help me with gguf for this model - EpistemeAI/EpistemeAI-codegemma-2-9b-ultra
Hi, the model has already been queued, but I have bumped its priority. You can watch its progress through the queue here: http://hf.tst.eu/status.html
Cheers!
Thank you
Can you please help me with this model too? EpistemeAI/Fireball-Llama-3.11-8B-v1orpo
Sure, it's queued and should be done in a few hours. You can also help me by providing the full URL next time, btw. :)
Thanks a bunch
Please help me with this model for gguf
EpistemeAI/Fireball-MathCoder-Llama-3.1-8B-v1dpo-16bitA
Queued (without the "A" at the end)
Thanks for the typo fix! You are the best!
I have one more, please. Thank you
EpistemeAI/Fireball-Llama-3.1-8B-Instruct-v1-16bit
Queued!
Thanks
Model request:
EpistemeAI/Fireball-Llama-3.1-8B-Instruct-v1dpo
and
EpistemeAI/Fireball-Nemo-Base-2407-sft-v1
Thank you so much
Nemo is queued, but the Llama one is not supported by llama.cpp because it is already quantized
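As background on that rejection: llama.cpp's conversion script needs full-precision (fp16/bf16/fp32) weights, and a checkpoint that was saved already quantized (e.g. via bitsandbytes 4-bit) typically advertises this with a `quantization_config` entry in its `config.json`. A minimal sketch of that pre-check, assuming you have the model's config as a dict (the helper name `is_prequantized` is made up for illustration):

```python
import json

def is_prequantized(config: dict) -> bool:
    # A "quantization_config" key in config.json indicates the checkpoint
    # was saved in an already-quantized format, which the GGUF conversion
    # script cannot consume; it needs the original full-precision weights.
    return "quantization_config" in config

# Example config fragments, as they might appear in a config.json.
quantized = {"model_type": "llama", "quantization_config": {"load_in_4bit": True}}
full_precision = {"model_type": "llama", "torch_dtype": "bfloat16"}

print(is_prequantized(quantized))        # True  -> would be rejected
print(is_prequantized(full_precision))   # False -> eligible for conversion
```

Checking this before submitting a request saves a round trip through the queue.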
request gguf for this one, I really appreciate it
EpistemeAI/Fireball-Mistral-Nemo-Base-2407-V2
Sorry, I pulled that one. EpistemeAI/Fireball-Mistral-Nemo-Base-2407-V2 changed to EpistemeAI/Fireball-12B
Queued!
Thanks
request gguf for
- EpistemeAI/Fireball-MathMistral-Nemo-Base-2407-v2dpo
- EpistemeAI/Fireball-MathMistral-Nemo-Base-2407
Queued! Actually a while ago, but I bumped its priority.
Kindly add gguf for these:
EpistemeAI/Fireball-12B-v1.0
EpistemeAI/Fireball-12B-v1.0-finance
Thanks
done!
thanks
please add, thanks
EpistemeAI/Athena-codegemma-2-9b-v1
done
I changed the name: mradermacher/Athene-Phi-3.5-instruct-GGUF -> EpistemeAI/Athene-Phi-3.5-mini-instruct-orpo-GGUF
I've renamed the repos
Thanks.
I also moved it to a different organization: EpistemeAI/Fireball-MathMistral-Nemo-Base-2407-v2dpo -> EpistemeAI2/Fireball-MathMistral-Nemo-Base-2407-v2dpo
I have a new model request: EpistemeAI/Fireball-12B-v1.2
I moved EpistemeAI/Fireball-12B-v1.2 to EpistemeAI2/Fireball-12B-v1.2