Model Description

This model was fine-tuned on top of Youlln/ECE-Qwen0.5B-FT-V2 to improve its performance on specific tasks. After fine-tuning, 8-bit quantization was applied using the bitsandbytes library. This reduces the model size and speeds up inference while maintaining a good level of accuracy, making the model suitable for environments where memory and computational efficiency are critical, such as edge devices or applications requiring faster response times.
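
The exact quantization script is not published with this card; the snippet below is only a minimal sketch of how an equivalent 8-bit conversion could be performed with transformers and bitsandbytes, starting from the fine-tuned checkpoint named above.

```python
# Minimal sketch (not the published script): load the fine-tuned checkpoint
# in 8-bit using bitsandbytes via transformers' BitsAndBytesConfig.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_id = "Youlln/ECE-Qwen0.5B-FT-V2"  # fine-tuned model named above

quant_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```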

Quantization was applied selectively: some layers remain in float16 to preserve precision in key computations, balancing efficiency and accuracy.
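
As a rough illustration of such a selective scheme, BitsAndBytesConfig accepts llm_int8_skip_modules to leave chosen modules in higher precision. The exact layers kept in float16 for this model are not documented; "lm_head" below is only an illustrative assumption.

```python
# Sketch of selective 8-bit quantization: chosen modules stay in float16.
# Which layers were actually skipped for this model is not documented;
# "lm_head" is an illustrative assumption.
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_skip_modules=["lm_head"],  # assumed example of a float16 module
)

model = AutoModelForCausalLM.from_pretrained(
    "Youlln/ECE-Qwen0.5B-FT-V2",
    quantization_config=quant_config,
    device_map="auto",
)
```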

  • Developed by: Youri Lalain (@Youlln)
  • Organization: ECE engineering school
Model details

  • Format: Safetensors
  • Model size: 494M params
  • Tensor types: F32, FP16, I8
Inference Examples
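The following is a minimal usage sketch, assuming the published 8-bit checkpoint loads directly with transformers (with bitsandbytes installed) and that its quantization settings are read from the checkpoint's config.

```python
# Sketch: load the published 8-bit checkpoint and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Youlln/ECE-EIFFEL.ia-0.5B-FT-V2-Q8"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain 8-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```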

Model tree for Youlln/ECE-EIFFEL.ia-0.5B-FT-V2-Q8

  • Base model: Qwen/Qwen2.5-0.5B
  • Quantized: this model