Just wanted to test the thesven/Llama3-8B-SFT-code_bagel-bnb-4bit model that someone trained on a small subset of my code_bagel dataset, so I used gguf-my-repo to quantize it.
THIS IS NOT MY MODEL
Update: I tested it and wasn't very impressed. Look forward to my own training run coming in a little over a month.
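If you want to try the quant yourself, here is a minimal sketch of loading it with llama-cpp-python; the exact GGUF filename inside the repo is an assumption, so the glob pattern may need adjusting.

```python
# Minimal sketch: load the Q8_0 GGUF from this repo with llama-cpp-python.
# The filename glob is an assumption about what gguf-my-repo produced.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="rombodawg/Llama3-8B-SFT-code_bagel-bnb-4bit-Q8_0-GGUF",
    filename="*q8_0.gguf",  # hypothetical pattern; check the repo's file list
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}]
)
print(out["choices"][0]["message"]["content"])
```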