GGUF for the 236B model

#4 opened by amarmir

Any plans for a GGUF of the 236B parameter model?

LM Studio Community org

Not on this page. We want to keep this page to models that the average user can run, and even at Q2 (the absolute bare minimum anyone should run) the 236B model is 86GB, which almost no one will be able to run, so we don't want to recommend it.

You can find some quants here if you do have the specs for it: https://huggingface.co/bartowski/DeepSeek-Coder-V2-Instruct-GGUF
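For context on that 86GB figure: a GGUF file's size is roughly parameters times bits-per-weight divided by eight. A minimal sketch of the arithmetic, assuming approximate average bits-per-weight values for llama.cpp k-quants (real files vary a little with the tensor mix and metadata):

```python
# Rough GGUF size estimate: parameters * bits-per-weight / 8.
# The bpw values below are approximate averages for llama.cpp
# k-quants (assumptions, not exact figures for this model).
PARAMS = 236e9

BPW = {
    "Q2_K": 2.96,    # bare-minimum quality; ~86GB file in practice
    "Q4_K_M": 4.85,  # common quality/size sweet spot
    "Q8_0": 8.50,    # near-lossless
}

for quant, bpw in BPW.items():
    size_gb = PARAMS * bpw / 8 / 1e9
    print(f"{quant}: ~{size_gb:.0f} GB")
```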

Thanks for your reply.
I'm running a machine with an A100 GPU and 220GB of RAM.
Do you think I would be able to run the following?
https://huggingface.co/bartowski/DeepSeek-Coder-V2-Instruct-GGUF/tree/main/DeepSeek-Coder-V2-Instruct-Q4_K_M.gguf
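As a rough sanity check (assuming the 80GB A100 variant and the ~4.85 bits/weight that Q4_K_M averages, both estimates), the file comes out around 143GB, so it won't fit in VRAM alone but should fit in VRAM plus 220GB of RAM:

```python
# Back-of-the-envelope fit check for a 236B model at Q4_K_M.
# ASSUMPTIONS: 80GB A100 (use 40 for the 40GB variant) and an
# average of ~4.85 bits/weight for Q4_K_M.
model_gb = 236e9 * 4.85 / 8 / 1e9   # ~143 GB

vram_gb = 80
ram_gb = 220

print(f"model size:        ~{model_gb:.0f} GB")
print(f"fits in VRAM only: {model_gb <= vram_gb}")           # False
print(f"fits in VRAM+RAM:  {model_gb <= vram_gb + ram_gb}")  # True
```

In llama.cpp that translates to partial GPU offload: keep most layers in system RAM and push as many as fit onto the A100 with `--n-gpu-layers`. Expect largely CPU-bound speeds, but it should load.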

> Not on this page. We want to keep this page to models that the average user can run, and even at Q2 (the absolute bare minimum anyone should run) the 236B model is 86GB, which almost no one will be able to run, so we don't want to recommend it.
>
> You can find some quants here if you do have the specs for it: https://huggingface.co/bartowski/DeepSeek-Coder-V2-Instruct-GGUF

My hardware specs: https://pastebin.com/wRVCpcep (I may need to update this to P40s, but I'm crazy enough to try)
