Uploaded model
- Developed by: Agnuxo (https://github.com/Agnuxo1)
- License: apache-2.0
- Finetuned from model: Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal
This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
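As a rough illustration of that training setup, the sketch below follows the common Unsloth + TRL supervised fine-tuning recipe. The dataset file, LoRA hyperparameters, and trainer arguments are assumptions (they are not documented in this card), and exact argument names can vary between library versions.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model taken from this card; 4-bit loading and sequence length are assumptions.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Agnuxo/Mistral-NeMo-Minitron-8B-Base-Nebulal",
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters; r, alpha, and target modules are illustrative defaults.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Hypothetical dataset file; the actual training data is not documented here.
dataset = load_dataset("json", data_files="python_instructions.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=2,
        max_steps=100,
    ),
)
trainer.train()
```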
Benchmark Results
This model has been fine-tuned for various tasks and evaluated on the following benchmarks:
- Model Size: 3,821,079,552 parameters
- Required Memory: 14.23 GB
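To put the memory figure above in context, here is a minimal loading and generation sketch using Hugging Face Transformers. The repository id is taken from this model page, and the dtype/device settings are assumptions you may need to adapt to your hardware.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repository id as shown on this model page; adjust if you use another checkpoint.
model_id = "Agnuxo/Phi-3.5-mini-instruct-python_coding_assistant_16bit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # 16-bit weights, roughly the ~14 GB noted above
    device_map="auto",
)

prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```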
For more details, visit my GitHub: https://github.com/Agnuxo1
Thanks for your interest in this model!