Just wanted to test the thesven/Llama3-8B-SFT-code_bagel-bnb-4bit model that someone trained on a small subset of my code_bagel dataset, so I used gguf-my-repo to quantize it.

THIS IS NOT MY MODEL

Update: I tested it, and I wasn't very impressed. Look forward to my own training run coming in a little over a month.

https://x.com/dudeman6790/status/1793638914353508549
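
If you want to try the quant yourself, here is a minimal sketch using llama-cpp-python (not something documented by the original author, just one way to load a GGUF; the exact .gguf filename below is an assumption, so check the repo's Files tab for the real name):

```python
# Minimal sketch: download the Q8_0 GGUF from the Hub and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# The filename is assumed from gguf-my-repo's usual naming; verify it in the repo.
model_path = hf_hub_download(
    repo_id="rombodawg/Llama3-8B-SFT-code_bagel-bnb-4bit-Q8_0-GGUF",
    filename="llama3-8b-sft-code_bagel-bnb-4bit-q8_0.gguf",
)

llm = Llama(model_path=model_path, n_ctx=8192)
out = llm("Write a Python function that reverses a string.", max_tokens=256)
print(out["choices"][0]["text"])
```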

Format: GGUF
Model size: 8.03B params
Architecture: llama
Quantization: 8-bit (Q8_0)
