Looking for GGUF? Multiple quantizations are available at theprint/CodeThink-8B-GRPO-GGUF.
Uploaded model
- Developed by: theprint
- License: apache-2.0
- Finetuned from model: theprint/Llama3.1-8B-CodeThink-16bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.