medalpaca GPTQ 4bit?

#2
by yhyhy3 - opened

First off, thank you so much for doing all this work converting these llama models for the community! It's been such a big help in decreasing cost and increasing speed by making the LLMs fit on my GPU in my projects.

Would you also consider generating a GPTQ 4bit version of the medalpaca model (llama fine-tuned on a collection of medical datasets plus alpaca) - https://huggingface.co/medalpaca/medalpaca-13b? Thank you so much in advance!

OK I will have a look!

Thank you so much!

TheBloke changed discussion status to closed

Omg thank you so much!
