Uploaded model

  • Developed by: Agnuxo
  • License: apache-2.0
  • Finetuned from model: Agnuxo/Phi-3.5

This model was fine-tuned using Unsloth and Hugging Face's TRL library.
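A minimal usage sketch with the `transformers` library, assuming a standard causal-LM checkpoint; the `generate` helper below is illustrative, not part of the released model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Agnuxo/Phi-3.5-ORPO_tron-Instruct_CODE_Python_English_Asistant-16bit-v2"

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion for `prompt` using the fine-tuned model."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Note that loading the full model requires enough memory for the weights (see the memory figure below).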

Benchmark Results

This model has been fine-tuned for instruction following and Python coding tasks, and evaluated on the following benchmark:

GLUE_MRPC

  • Accuracy: 0.5784
  • F1: 0.6680

GLUE_MRPC Metrics
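MRPC is a binary paraphrase-detection task, so the two scores above are standard accuracy and positive-class F1. A minimal sketch of how such scores are computed with scikit-learn, using toy labels rather than the actual evaluation data:

```python
from sklearn.metrics import accuracy_score, f1_score

# Toy labels for illustration only -- not the MRPC evaluation set.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1]

acc = accuracy_score(y_true, y_pred)  # fraction of exact matches
f1 = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
print(f"Accuracy: {acc:.4f}  F1: {f1:.4f}")
```

F1 weights false positives and false negatives together, which is why it can differ noticeably from accuracy on an imbalanced set like MRPC.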

  • Model size: 3,722,585,088 parameters
  • Required memory: 13.87 GB
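The memory figure follows from storing each parameter as a 4-byte value (fp32); a quick sanity check of that arithmetic:

```python
# Memory needed to hold all weights, assuming 4 bytes (fp32) per parameter.
# Loading in BF16 instead would need roughly half of this.
NUM_PARAMS = 3_722_585_088
BYTES_PER_PARAM = 4

total_gib = NUM_PARAMS * BYTES_PER_PARAM / 2**30
print(f"{total_gib:.2f} GB")  # ≈ 13.87 GB, matching the figure above
```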

For more details, visit my GitHub.

Thanks for your interest in this model!
