Uploaded model

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Agnuxo/Tinytron-TinyLlama

This model, finetuned from TinyLlama, was trained 2x faster with Unsloth and Hugging Face's TRL library.

Benchmark Results

This model has been fine-tuned for various tasks and evaluated on the following benchmarks:

glue-sst2

  • Accuracy: 0.5183

Model size: 1,034,516,480 parameters
Required memory: 3.85 GB
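The size and memory figures above are consistent with simple arithmetic: 1,034,516,480 parameters at 4 bytes each (FP32) come to about 3.85 GiB, and BF16 weights (2 bytes each) would need roughly half that. A minimal sketch of the calculation (the function name is illustrative, not from any library):

```python
# Estimate raw weight storage from the parameter count reported on the card.
# This counts weights only, not activations or KV cache.

PARAMS = 1_034_516_480  # parameter count from the benchmark section above

def weight_memory_gib(params: int, bytes_per_param: int) -> float:
    """Weight storage in GiB for a given precision."""
    return params * bytes_per_param / 1024**3

fp32 = weight_memory_gib(PARAMS, 4)  # ~3.85 GiB, matching the card
bf16 = weight_memory_gib(PARAMS, 2)  # roughly half of the FP32 figure
print(f"FP32: {fp32:.2f} GiB, BF16: {bf16:.2f} GiB")
```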

For more details, visit my GitHub.

Thanks for your interest in this model!

Safetensors

  • Model size: 1.1B params
  • Tensor type: BF16

Model tree for Agnuxo/Tinytron-TinyLlama-Instruct_CODE_Python_Spanish_English_Asistant-16bit-v2

  • Finetunes: 2 models
  • Quantizations: 5 models