
Model Card for Phi3-mini-DPO-Tuned (4-bit quantized)

A 4-bit, double-quantized version of ernestoBocini/Phi3-mini-DPO-Tuned.

Model Details

This is Phi-3-mini-4k-instruct, fine-tuned with SFT and DPO on STEM-domain data and then quantized to 4-bit precision, intended to serve as an AI university tutor.

Quantization config used:

import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
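The snippet below is a minimal usage sketch, not an official recipe from the model author: it continues from the bnb_config object defined above and loads the base ernestoBocini/Phi3-mini-DPO-Tuned checkpoint with transformers and bitsandbytes, so the same 4-bit setup is applied at load time. The prompt is purely illustrative, and depending on your transformers version, trust_remote_code=True may also be needed for Phi-3 models.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ernestoBocini/Phi3-mini-DPO-Tuned"  # base SFT+DPO checkpoint named above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,  # reuses the configuration defined above
    device_map="auto",
)

# Illustrative STEM-tutoring prompt
prompt = "Explain the intuition behind gradient descent to a first-year student."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))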
Safetensors model size: 2.07B params (tensor types: F32, FP16, U8)