GGUF Q8 Please.

#1
by Ransss - opened

I tested your models, and I like how distinctive their responses are. You're doing great work with your private data.

I tried to quantize it, using GGUF-my-repo, but it didn't work.

I will try to convert this one.

@Ransss Alright, Q8 on the way before the rest since that's the one you wanted.

Should be there in a few minutes.

https://huggingface.co/Lewdiculous/Orthocopter_8B-GGUF-Imatrix

Thank you, @Lewdiculous.

Owner • edited May 10

I tried to quantize it, using GGUF-my-repo, but it didn't work.

Unfortunately, that Space does not run the BPE-vocab conversion step needed for Llama 3 models. In the future, I suggest simply using llama.cpp on your local hardware or setting up a Colab notebook with the appropriate scripts. Alternatively, you can wait for Lewdiculous, whom I will contact for IQ quants if I deem the model worthwhile.
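For reference, here's a rough sketch of what the local workflow can look like. It assumes a llama.cpp checkout with its Python requirements installed and a local download of the model; the paths and output filenames are placeholders, and newer llama.cpp builds name the quantize binary llama-quantize instead of quantize.

```python
# Minimal sketch of a local GGUF conversion + Q8_0 quantization with llama.cpp.
# Paths and filenames below are placeholders, not the actual repo layout.
import subprocess

model_dir = "path/to/Orthocopter_8B"        # local HF model download (placeholder)
gguf_f16 = "orthocopter_8b-f16.gguf"        # intermediate full-precision GGUF
gguf_q8 = "orthocopter_8b-Q8_0.gguf"        # final Q8_0 quant

# Convert the HF checkpoint to GGUF; convert-hf-to-gguf.py handles the
# BPE pre-tokenizer that Llama 3 models need.
subprocess.run(
    ["python", "convert-hf-to-gguf.py", model_dir, "--outfile", gguf_f16],
    check=True,
)

# Quantize to Q8_0 with the compiled llama.cpp quantize binary
# (may be called llama-quantize in newer builds).
subprocess.run(["./quantize", gguf_f16, gguf_q8, "Q8_0"], check=True)
```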

jeiku changed discussion status to closed
jeiku changed discussion status to open
Owner

@Lewdiculous Thanks a bunch, man. I'm not sure how often I'll produce a new model, but I may consider another one soon depending on leaderboard results.

Keep them coming.

[This discussion can be closed if you want.]
